Test Report: Hyperkit_macOS 19373

afa0c1cf199b27e59d48f8572184259dc9d34cb2:2024-08-05:35664

Test fail (26/227)
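In the TestOffline log below, "minikube start" exits with status 80 after the hyperkit driver creates the VM and then repeatedly polls the host's DHCP lease table for the MAC address it generated (ba:c0:40:dc:42:f1 in this run), logging one "Searching for ... in /var/db/dhcpd_leases" pass per attempt. For orientation only, here is a minimal Go sketch of such a lease lookup; it is not minikube's implementation, and it assumes the macOS bootpd lease format implied by the entries the driver logs (name, ip_address, hw_address, lease per block):

// leaselookup.go: hypothetical illustration of searching /var/db/dhcpd_leases
// for a VM's MAC address, mirroring the driver's "Searching for <MAC> in
// /var/db/dhcpd_leases" attempts in the log below. Not minikube's actual code.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPByMAC scans a bootpd-style lease file and returns the ip_address of
// the first block whose hw_address matches mac. It assumes ip_address
// precedes hw_address within a block, as in the entries logged below, and
// compares MAC strings verbatim (no zero-padding normalization).
func findIPByMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	ip := ""
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // a new lease block starts; forget the previous IP
			ip = ""
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address looks like "1,ba:c0:40:dc:42:f1" (type, then MAC).
			parts := strings.SplitN(strings.TrimPrefix(line, "hw_address="), ",", 2)
			if len(parts) == 2 && parts[1] == mac && ip != "" {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	// In the failing run below, this MAC never appears in the lease file, so
	// each poll comes back empty and the driver keeps retrying until timeout.
	ip, err := findIPByMAC("/var/db/dhcpd_leases", "ba:c0:40:dc:42:f1")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("VM IP:", ip)
}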

TestOffline (195.33s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-642000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p offline-docker-642000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : exit status 80 (3m9.918322746s)

-- stdout --
	* [offline-docker-642000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "offline-docker-642000" primary control-plane node in "offline-docker-642000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "offline-docker-642000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...

-- /stdout --
** stderr ** 
	I0805 16:50:51.647097    6084 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:50:51.647312    6084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:50:51.647317    6084 out.go:304] Setting ErrFile to fd 2...
	I0805 16:50:51.647321    6084 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:50:51.647522    6084 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:50:51.649352    6084 out.go:298] Setting JSON to false
	I0805 16:50:51.674907    6084 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4822,"bootTime":1722897029,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:50:51.675006    6084 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:50:51.731239    6084 out.go:177] * [offline-docker-642000] minikube v1.33.1 on Darwin 14.5
	I0805 16:50:51.780406    6084 notify.go:220] Checking for updates...
	I0805 16:50:51.817501    6084 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:50:51.839584    6084 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:50:51.865223    6084 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:50:51.886950    6084 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:50:51.908117    6084 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:50:51.929150    6084 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:50:51.950223    6084 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:50:51.978034    6084 out.go:177] * Using the hyperkit driver based on user configuration
	I0805 16:50:52.020192    6084 start.go:297] selected driver: hyperkit
	I0805 16:50:52.020218    6084 start.go:901] validating driver "hyperkit" against <nil>
	I0805 16:50:52.020254    6084 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:50:52.024703    6084 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:50:52.024818    6084 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 16:50:52.032889    6084 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 16:50:52.036823    6084 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:50:52.036841    6084 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 16:50:52.036875    6084 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:50:52.037106    6084 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:50:52.037133    6084 cni.go:84] Creating CNI manager for ""
	I0805 16:50:52.037150    6084 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:50:52.037157    6084 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 16:50:52.037230    6084 start.go:340] cluster config:
	{Name:offline-docker-642000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-642000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:50:52.037316    6084 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:50:52.084102    6084 out.go:177] * Starting "offline-docker-642000" primary control-plane node in "offline-docker-642000" cluster
	I0805 16:50:52.105294    6084 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:50:52.105376    6084 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 16:50:52.105417    6084 cache.go:56] Caching tarball of preloaded images
	I0805 16:50:52.105678    6084 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:50:52.105699    6084 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:50:52.106237    6084 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/offline-docker-642000/config.json ...
	I0805 16:50:52.106286    6084 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/offline-docker-642000/config.json: {Name:mk01c175fbf0693a0f28b422dd03be320ca1ac60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:50:52.106889    6084 start.go:360] acquireMachinesLock for offline-docker-642000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:50:52.107006    6084 start.go:364] duration metric: took 87.502µs to acquireMachinesLock for "offline-docker-642000"
	I0805 16:50:52.107054    6084 start.go:93] Provisioning new machine with config: &{Name:offline-docker-642000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-642000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:50:52.107136    6084 start.go:125] createHost starting for "" (driver="hyperkit")
	I0805 16:50:52.128123    6084 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 16:50:52.128279    6084 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:50:52.128320    6084 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:50:52.137045    6084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53630
	I0805 16:50:52.137445    6084 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:50:52.137854    6084 main.go:141] libmachine: Using API Version  1
	I0805 16:50:52.137864    6084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:50:52.138295    6084 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:50:52.138430    6084 main.go:141] libmachine: (offline-docker-642000) Calling .GetMachineName
	I0805 16:50:52.138522    6084 main.go:141] libmachine: (offline-docker-642000) Calling .DriverName
	I0805 16:50:52.138649    6084 start.go:159] libmachine.API.Create for "offline-docker-642000" (driver="hyperkit")
	I0805 16:50:52.138673    6084 client.go:168] LocalClient.Create starting
	I0805 16:50:52.138713    6084 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:50:52.138771    6084 main.go:141] libmachine: Decoding PEM data...
	I0805 16:50:52.138785    6084 main.go:141] libmachine: Parsing certificate...
	I0805 16:50:52.138854    6084 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:50:52.138893    6084 main.go:141] libmachine: Decoding PEM data...
	I0805 16:50:52.138904    6084 main.go:141] libmachine: Parsing certificate...
	I0805 16:50:52.138919    6084 main.go:141] libmachine: Running pre-create checks...
	I0805 16:50:52.138928    6084 main.go:141] libmachine: (offline-docker-642000) Calling .PreCreateCheck
	I0805 16:50:52.139011    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:50:52.139208    6084 main.go:141] libmachine: (offline-docker-642000) Calling .GetConfigRaw
	I0805 16:50:52.149149    6084 main.go:141] libmachine: Creating machine...
	I0805 16:50:52.149162    6084 main.go:141] libmachine: (offline-docker-642000) Calling .Create
	I0805 16:50:52.149302    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:50:52.149417    6084 main.go:141] libmachine: (offline-docker-642000) DBG | I0805 16:50:52.149286    6105 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:50:52.149481    6084 main.go:141] libmachine: (offline-docker-642000) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:50:52.622841    6084 main.go:141] libmachine: (offline-docker-642000) DBG | I0805 16:50:52.622769    6105 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/id_rsa...
	I0805 16:50:52.765289    6084 main.go:141] libmachine: (offline-docker-642000) DBG | I0805 16:50:52.765228    6105 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/offline-docker-642000.rawdisk...
	I0805 16:50:52.765307    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Writing magic tar header
	I0805 16:50:52.765319    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Writing SSH key tar header
	I0805 16:50:52.766372    6084 main.go:141] libmachine: (offline-docker-642000) DBG | I0805 16:50:52.766301    6105 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000 ...
	I0805 16:50:53.222424    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:50:53.222449    6084 main.go:141] libmachine: (offline-docker-642000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/hyperkit.pid
	I0805 16:50:53.222460    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Using UUID f3696461-7c9d-407a-83e3-8de61fa735e7
	I0805 16:50:53.503516    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Generated MAC ba:c0:40:dc:42:f1
	I0805 16:50:53.503537    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-642000
	I0805 16:50:53.503580    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:53 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f3696461-7c9d-407a-83e3-8de61fa735e7", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2270)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:50:53.503615    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:53 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f3696461-7c9d-407a-83e3-8de61fa735e7", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2270)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:50:53.503709    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:53 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f3696461-7c9d-407a-83e3-8de61fa735e7", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/offline-docker-642000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-642000"}
	I0805 16:50:53.503754    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:53 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f3696461-7c9d-407a-83e3-8de61fa735e7 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/offline-docker-642000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-642000"
	I0805 16:50:53.503764    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:53 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:50:53.507021    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:53 DEBUG: hyperkit: Pid is 6132
	I0805 16:50:53.507660    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 0
	I0805 16:50:53.507677    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:50:53.507736    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:50:53.508839    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for ba:c0:40:dc:42:f1 in /var/db/dhcpd_leases ...
	I0805 16:50:53.508977    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:50:53.508991    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:50:53.509020    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:50:53.509037    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:50:53.509053    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:50:53.509079    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:50:53.509095    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:50:53.509121    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:50:53.509138    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:50:53.509152    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:50:53.509168    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:50:53.509181    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:50:53.509194    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:50:53.509206    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:50:53.509219    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:50:53.509231    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:50:53.509243    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:50:53.509260    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:50:53.514571    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:53 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:50:53.568744    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:53 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:50:53.586669    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:50:53.586691    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:50:53.586698    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:50:53.586704    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:50:53.964237    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:50:53.964266    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:50:54.079130    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:50:54.079150    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:50:54.079159    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:50:54.079170    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:50:54.080007    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:50:54.080016    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:50:55.510021    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 1
	I0805 16:50:55.510035    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:50:55.510104    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:50:55.510881    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for ba:c0:40:dc:42:f1 in /var/db/dhcpd_leases ...
	I0805 16:50:55.510934    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:50:55.510953    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:50:55.510963    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:50:55.510979    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:50:55.510995    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:50:55.511006    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:50:55.511027    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:50:55.511036    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:50:55.511042    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:50:55.511059    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:50:55.511073    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:50:55.511102    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:50:55.511117    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:50:55.511129    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:50:55.511137    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:50:55.511146    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:50:55.511156    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:50:55.511182    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:50:57.511890    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 2
	I0805 16:50:57.511907    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:50:57.512010    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:50:57.512824    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for ba:c0:40:dc:42:f1 in /var/db/dhcpd_leases ...
	I0805 16:50:57.512857    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:50:57.512884    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:50:57.512897    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:50:57.512913    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:50:57.512926    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:50:57.512937    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:50:57.512944    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:50:57.512950    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:50:57.512957    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:50:57.512972    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:50:57.512980    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:50:57.512988    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:50:57.512996    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:50:57.513002    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:50:57.513010    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:50:57.513022    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:50:57.513032    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:50:57.513040    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:50:59.485207    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:59 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:50:59.485337    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:59 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:50:59.485346    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:59 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:50:59.505479    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:50:59 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:50:59.513982    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 3
	I0805 16:50:59.513994    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:50:59.514046    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:50:59.515049    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for ba:c0:40:dc:42:f1 in /var/db/dhcpd_leases ...
	I0805 16:50:59.515146    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:50:59.515200    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:50:59.515215    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:50:59.515231    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:50:59.515241    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:50:59.515253    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:50:59.515266    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:50:59.515277    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:50:59.515286    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:50:59.515293    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:50:59.515301    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:50:59.515308    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:50:59.515314    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:50:59.515328    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:50:59.515337    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:50:59.515344    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:50:59.515351    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:50:59.515366    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:51:01.516432    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 4
	I0805 16:51:01.516446    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:01.516542    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:51:01.517370    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for ba:c0:40:dc:42:f1 in /var/db/dhcpd_leases ...
	I0805 16:51:01.517432    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:51:01.517442    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:51:01.517464    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:51:01.517477    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:51:01.517490    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:51:01.517499    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:51:01.517511    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:51:01.517524    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:51:01.517534    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:51:01.517559    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:51:01.517579    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:51:01.517594    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:51:01.517611    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:51:01.517623    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:51:01.517631    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:51:01.517642    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:51:01.517654    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:51:01.517665    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:51:03.519676    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 5
	I0805 16:51:03.519698    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:03.519770    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:51:03.520551    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for ba:c0:40:dc:42:f1 in /var/db/dhcpd_leases ...
	I0805 16:51:03.520609    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:51:03.520619    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:51:03.520628    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:51:03.520636    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:51:03.520648    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:51:03.520656    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:51:03.520668    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:51:03.520680    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:51:03.520693    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:51:03.520703    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:51:03.520710    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:51:03.520716    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:51:03.520722    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:51:03.520728    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:51:03.520745    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:51:03.520759    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:51:03.520780    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:51:03.520791    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:51:05.521888    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 6
	I0805 16:51:05.521901    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:05.521955    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:51:05.522739    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for ba:c0:40:dc:42:f1 in /var/db/dhcpd_leases ...
	I0805 16:51:05.522753    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:51:05.522763    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:51:05.522770    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:51:05.522784    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:51:05.522809    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:51:05.522838    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:51:05.522852    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:51:05.522868    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:51:05.522884    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:51:05.522893    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:51:05.522910    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:51:05.522917    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:51:05.522926    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:51:05.522933    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:51:05.522939    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:51:05.522944    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:51:05.522954    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:51:05.522971    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:51:07.523922    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 7
	I0805 16:51:07.523938    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:07.523948    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:51:07.524818    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for ba:c0:40:dc:42:f1 in /var/db/dhcpd_leases ...
	I0805 16:51:07.524861    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:51:07.524874    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:51:07.524884    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:51:07.524892    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:51:07.524899    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:51:07.524906    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:51:07.524921    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:51:07.524933    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:51:07.524942    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:51:07.524948    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:51:07.524963    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:51:07.524975    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:51:07.524983    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:51:07.524991    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:51:07.524998    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:51:07.525006    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:51:07.525021    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:51:07.525033    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
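	Each "Attempt N" block above is the hyperkit driver polling /var/db/dhcpd_leases for the new VM's MAC address (ba:c0:40:dc:42:f1) so it can learn the VM's IP. The Go sketch below is a minimal, hypothetical illustration of that loop, not the actual docker-machine-driver-hyperkit code; it assumes the macOS bootpd lease-file format (brace-delimited entries with ip_address= and hw_address= stanzas), and the real driver also re-checks each pass that the hyperkit pid is still alive.
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
		"time"
	)
	
	// findIPByMAC scans a bootpd-style lease file for an entry whose
	// hw_address matches mac and returns its ip_address ("" if absent).
	// Assumes ip_address precedes hw_address inside each { ... } entry,
	// which is how macOS bootpd writes /var/db/dhcpd_leases.
	func findIPByMAC(path, mac string) (string, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return "", err
		}
		var ip string
		for _, raw := range strings.Split(string(data), "\n") {
			line := strings.TrimSpace(raw)
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// Stored as "hw_address=1,aa:bb:cc:dd:ee:ff"; drop the "1," type prefix.
				hw := strings.TrimPrefix(line, "hw_address=")
				if i := strings.IndexByte(hw, ','); i >= 0 {
					hw = hw[i+1:]
				}
				// NB: bootpd may omit leading zeros in octets (e.g. "b2:64:5d:40:b:b5");
				// real code would normalize both sides before comparing.
				if strings.EqualFold(hw, mac) {
					return ip, nil
				}
			}
		}
		return "", nil
	}
	
	func main() {
		const mac = "ba:c0:40:dc:42:f1" // the MAC the log above is waiting on
		for attempt := 1; attempt <= 60; attempt++ { // retry budget is arbitrary in this sketch
			ip, err := findIPByMAC("/var/db/dhcpd_leases", mac)
			if err != nil {
				fmt.Fprintln(os.Stderr, "read leases:", err)
				os.Exit(1)
			}
			if ip != "" {
				fmt.Printf("attempt %d: %s -> %s\n", attempt, mac, ip)
				return
			}
			fmt.Printf("attempt %d: %s not found yet\n", attempt, mac)
			time.Sleep(2 * time.Second) // the log shows ~2 s between attempts
		}
		fmt.Fprintln(os.Stderr, "gave up waiting for a DHCP lease")
		os.Exit(1)
	}
	
	In the run above the target MAC never shows up among the 17 recorded leases, so every attempt falls through to the sleep and the loop keeps re-reading the same table.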
	[... Attempts 8 through 21 (16:51:09.5 to 16:51:35.6) repeat the probe above verbatim at ~2 s intervals: hyperkit pid 6132 is still running, /var/db/dhcpd_leases still contains the identical 17 entries listed under Attempt 7, and ba:c0:40:dc:42:f1 is never found ...]
	I0805 16:51:37.562404    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 22
	I0805 16:51:37.562421    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:37.562507    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:51:37.563351    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for ba:c0:40:dc:42:f1 in /var/db/dhcpd_leases ...
	I0805 16:51:37.563409    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:51:37.563422    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:51:37.563435    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:51:37.563444    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:51:37.563451    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:51:37.563459    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:51:37.563481    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:51:37.563498    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:51:37.563510    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:51:37.563527    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:51:37.563537    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:51:37.563546    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:51:37.563554    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:51:37.563561    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:51:37.563569    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:51:37.563577    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:51:37.563590    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:51:37.563600    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:51:39.563718    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 23
	I0805 16:51:39.563731    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:39.563783    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:51:39.564599    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for ba:c0:40:dc:42:f1 in /var/db/dhcpd_leases ...
	I0805 16:51:39.564607    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:51:39.564616    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:51:39.564623    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:51:39.564636    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:51:39.564651    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:51:39.564658    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:51:39.564666    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:51:39.564684    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:51:39.564698    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:51:39.564718    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:51:39.564728    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:51:39.564735    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:51:39.564741    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:51:39.564753    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:51:39.564765    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:51:39.564775    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:51:39.564786    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:51:39.564796    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:51:41.566104    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 24
	I0805 16:51:41.566130    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:41.566213    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:51:41.567007    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for ba:c0:40:dc:42:f1 in /var/db/dhcpd_leases ...
	I0805 16:51:41.567032    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:51:41.567039    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:51:41.567050    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:51:41.567079    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:51:41.567096    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:51:41.567107    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:51:41.567124    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:51:41.567133    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:51:41.567145    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:51:41.567159    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:51:41.567171    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:51:41.567180    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:51:41.567189    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:51:41.567196    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:51:41.567204    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:51:41.567211    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:51:41.567218    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:51:41.567226    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:51:43.567857    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 25
	I0805 16:51:43.567870    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:43.567963    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:51:43.568729    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for ba:c0:40:dc:42:f1 in /var/db/dhcpd_leases ...
	I0805 16:51:43.568777    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:51:43.568787    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:51:43.568797    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:51:43.568808    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:51:43.568819    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:51:43.568829    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:51:43.568850    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:51:43.568860    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:51:43.568867    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:51:43.568875    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:51:43.568882    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:51:43.568889    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:51:43.568897    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:51:43.568905    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:51:43.568912    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:51:43.568918    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:51:43.568937    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:51:43.568945    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:51:45.569738    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 26
	I0805 16:51:45.569755    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:45.569812    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:51:45.570568    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for ba:c0:40:dc:42:f1 in /var/db/dhcpd_leases ...
	I0805 16:51:45.570618    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:51:45.570642    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:51:45.570660    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:51:45.570670    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:51:45.570677    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:51:45.570686    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:51:45.570697    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:51:45.570706    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:51:45.570712    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:51:45.570718    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:51:45.570729    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:51:45.570737    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:51:45.570745    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:51:45.570754    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:51:45.570762    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:51:45.570771    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:51:45.570787    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:51:45.570800    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:51:47.572903    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 27
	I0805 16:51:47.572919    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:47.572987    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:51:47.573800    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for ba:c0:40:dc:42:f1 in /var/db/dhcpd_leases ...
	I0805 16:51:47.573905    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:51:47.573915    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:51:47.573929    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:51:47.573938    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:51:47.573965    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:51:47.573973    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:51:47.573982    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:51:47.573990    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:51:47.573997    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:51:47.574002    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:51:47.574009    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:51:47.574023    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:51:47.574030    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:51:47.574037    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:51:47.574046    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:51:47.574054    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:51:47.574061    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:51:47.574069    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:51:49.575770    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 28
	I0805 16:51:49.575781    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:49.575847    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:51:49.577060    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for ba:c0:40:dc:42:f1 in /var/db/dhcpd_leases ...
	I0805 16:51:49.577104    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:51:49.577117    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:51:49.577126    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:51:49.577134    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:51:49.577141    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:51:49.577148    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:51:49.577155    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:51:49.577162    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:51:49.577168    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:51:49.577174    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:51:49.577189    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:51:49.577198    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:51:49.577209    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:51:49.577217    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:51:49.577224    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:51:49.577232    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:51:49.577240    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:51:49.577246    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:51:51.578091    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 29
	I0805 16:51:51.578112    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:51.578202    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:51:51.579061    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for ba:c0:40:dc:42:f1 in /var/db/dhcpd_leases ...
	I0805 16:51:51.579099    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:51:51.579111    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:51:51.579121    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:51:51.579134    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:51:51.579143    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:51:51.579152    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:51:51.579158    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:51:51.579169    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:51:51.579177    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:51:51.579185    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:51:51.579192    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:51:51.579207    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:51:51.579215    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:51:51.579229    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:51:51.579243    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:51:51.579257    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:51:51.579266    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:51:51.579280    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:51:53.580531    6084 client.go:171] duration metric: took 1m1.441629852s to LocalClient.Create
	I0805 16:51:55.582669    6084 start.go:128] duration metric: took 1m3.475296288s to createHost
	I0805 16:51:55.582685    6084 start.go:83] releasing machines lock for "offline-docker-642000", held for 1m3.475444651s
	W0805 16:51:55.582708    6084 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ba:c0:40:dc:42:f1
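The thirty "Attempt N" blocks above are the heart of this failure: every two seconds the driver re-reads /var/db/dhcpd_leases looking for a lease whose hardware address matches the new VM's MAC, finds only the 17 stale minikube entries, and finally gives up with "could not find an IP address for ba:c0:40:dc:42:f1". A minimal Go sketch of that polling loop follows. It is illustrative only, not the driver's code, and the hw_address=1,<mac> entry format it parses is an assumption about the raw macOS lease file (the log shows only the parsed entries).

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    	"time"
    )

    // leaseHasMAC reports whether any hw_address entry in the DHCP lease file
    // matches mac (compared case-insensitively). Assumed raw entry shape:
    //   hw_address=1,ba:c0:40:dc:42:f1
    func leaseHasMAC(path, mac string) bool {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false
    	}
    	for _, line := range strings.Split(string(data), "\n") {
    		v, ok := strings.CutPrefix(strings.TrimSpace(line), "hw_address=")
    		if !ok {
    			continue
    		}
    		if i := strings.IndexByte(v, ','); i >= 0 {
    			v = v[i+1:] // drop the "1," hardware-type tag
    		}
    		if strings.EqualFold(v, mac) {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	const mac = "ba:c0:40:dc:42:f1"
    	for attempt := 0; attempt < 30; attempt++ { // "Attempt 0" .. "Attempt 29"
    		if leaseHasMAC("/var/db/dhcpd_leases", mac) {
    			fmt.Println("found lease for", mac)
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Fprintf(os.Stderr, "could not find an IP address for %s\n", mac)
    }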
	I0805 16:51:55.583061    6084 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:51:55.583118    6084 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:51:55.592480    6084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53667
	I0805 16:51:55.592908    6084 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:51:55.593392    6084 main.go:141] libmachine: Using API Version  1
	I0805 16:51:55.593406    6084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:51:55.593640    6084 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:51:55.594039    6084 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:51:55.594083    6084 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:51:55.602874    6084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53669
	I0805 16:51:55.603324    6084 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:51:55.603697    6084 main.go:141] libmachine: Using API Version  1
	I0805 16:51:55.603714    6084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:51:55.603914    6084 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:51:55.604044    6084 main.go:141] libmachine: (offline-docker-642000) Calling .GetState
	I0805 16:51:55.604145    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:55.604212    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:51:55.605332    6084 main.go:141] libmachine: (offline-docker-642000) Calling .DriverName
	I0805 16:51:55.647760    6084 out.go:177] * Deleting "offline-docker-642000" in hyperkit ...
	I0805 16:51:55.669083    6084 main.go:141] libmachine: (offline-docker-642000) Calling .Remove
	I0805 16:51:55.669225    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:55.669235    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:55.669298    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:51:55.670243    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:55.670291    6084 main.go:141] libmachine: (offline-docker-642000) DBG | waiting for graceful shutdown
	I0805 16:51:56.670407    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:56.670562    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:51:56.671462    6084 main.go:141] libmachine: (offline-docker-642000) DBG | waiting for graceful shutdown
	I0805 16:51:57.673173    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:57.673291    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:51:57.675020    6084 main.go:141] libmachine: (offline-docker-642000) DBG | waiting for graceful shutdown
	I0805 16:51:58.676346    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:58.676407    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:51:58.677160    6084 main.go:141] libmachine: (offline-docker-642000) DBG | waiting for graceful shutdown
	I0805 16:51:59.679281    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:59.679336    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:51:59.679957    6084 main.go:141] libmachine: (offline-docker-642000) DBG | waiting for graceful shutdown
	I0805 16:52:00.681235    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:00.681268    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6132
	I0805 16:52:00.682350    6084 main.go:141] libmachine: (offline-docker-642000) DBG | sending sigkill
	I0805 16:52:00.682360    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0805 16:52:00.693977    6084 out.go:239] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ba:c0:40:dc:42:f1
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ba:c0:40:dc:42:f1
	I0805 16:52:00.694003    6084 start.go:729] Will try again in 5 seconds ...
	I0805 16:52:00.704655    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:00 WARN : hyperkit: failed to read stdout: EOF
	I0805 16:52:00.704675    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:00 WARN : hyperkit: failed to read stderr: EOF
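The teardown just logged follows a poll-then-kill pattern: check once per second whether the hyperkit process has exited gracefully, and escalate to SIGKILL when it has not (the two EOF warnings are hyperkit's stdout/stderr pipes closing after the kill). A hedged sketch of that escalation; the one-second poll and roughly five-second grace budget are assumptions read off the timestamps above:

    package main

    import (
    	"log"
    	"os"
    	"syscall"
    	"time"
    )

    // stopVM waits briefly for the hyperkit process to exit on its own, then
    // escalates to SIGKILL. The 1s poll and 5s grace budget are assumptions.
    func stopVM(proc *os.Process) error {
    	for i := 0; i < 5; i++ {
    		// Signal 0 delivers nothing; it only tests that the process exists.
    		if err := proc.Signal(syscall.Signal(0)); err != nil {
    			return nil // already gone: graceful shutdown succeeded
    		}
    		log.Println("waiting for graceful shutdown")
    		time.Sleep(time.Second)
    	}
    	log.Println("sending sigkill")
    	return proc.Kill()
    }

    func main() {
    	proc, err := os.FindProcess(6132) // pid taken from the log, illustrative only
    	if err != nil {
    		log.Fatal(err)
    	}
    	_ = stopVM(proc)
    }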
	I0805 16:52:05.696085    6084 start.go:360] acquireMachinesLock for offline-docker-642000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:52:58.381155    6084 start.go:364] duration metric: took 52.684857028s to acquireMachinesLock for "offline-docker-642000"
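The 52-second pause before this line is lock contention, not work: presumably another concurrent test's createHost holds the machines lock, so this retry blocks until it is released. The logged parameters (Delay:500ms Timeout:13m0s) describe a simple poll-until-deadline acquisition, sketched below with a hypothetical tryLock stand-in for the real file-lock primitive:

    package main

    import (
    	"fmt"
    	"time"
    )

    // acquireWithTimeout retries tryLock every delay until timeout elapses,
    // mirroring the Delay:500ms / Timeout:13m0s fields in the log line above.
    func acquireWithTimeout(tryLock func() bool, delay, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if tryLock() {
    			return nil
    		}
    		time.Sleep(delay)
    	}
    	return fmt.Errorf("lock not acquired within %s", timeout)
    }

    func main() {
    	free := time.Now().Add(2 * time.Second) // pretend the holder releases in 2s
    	err := acquireWithTimeout(func() bool { return time.Now().After(free) },
    		500*time.Millisecond, 13*time.Minute)
    	fmt.Println("acquired:", err == nil)
    }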
	I0805 16:52:58.381195    6084 start.go:93] Provisioning new machine with config: &{Name:offline-docker-642000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:offline-docker-642000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:52:58.381250    6084 start.go:125] createHost starting for "" (driver="hyperkit")
	I0805 16:52:58.402532    6084 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 16:52:58.402617    6084 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:52:58.402639    6084 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:52:58.411162    6084 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53677
	I0805 16:52:58.411518    6084 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:52:58.411841    6084 main.go:141] libmachine: Using API Version  1
	I0805 16:52:58.411852    6084 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:52:58.412035    6084 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:52:58.412167    6084 main.go:141] libmachine: (offline-docker-642000) Calling .GetMachineName
	I0805 16:52:58.412277    6084 main.go:141] libmachine: (offline-docker-642000) Calling .DriverName
	I0805 16:52:58.412380    6084 start.go:159] libmachine.API.Create for "offline-docker-642000" (driver="hyperkit")
	I0805 16:52:58.412398    6084 client.go:168] LocalClient.Create starting
	I0805 16:52:58.412426    6084 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:52:58.412483    6084 main.go:141] libmachine: Decoding PEM data...
	I0805 16:52:58.412498    6084 main.go:141] libmachine: Parsing certificate...
	I0805 16:52:58.412545    6084 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:52:58.412586    6084 main.go:141] libmachine: Decoding PEM data...
	I0805 16:52:58.412597    6084 main.go:141] libmachine: Parsing certificate...
	I0805 16:52:58.412611    6084 main.go:141] libmachine: Running pre-create checks...
	I0805 16:52:58.412616    6084 main.go:141] libmachine: (offline-docker-642000) Calling .PreCreateCheck
	I0805 16:52:58.412682    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:58.412702    6084 main.go:141] libmachine: (offline-docker-642000) Calling .GetConfigRaw
	I0805 16:52:58.481282    6084 main.go:141] libmachine: Creating machine...
	I0805 16:52:58.481304    6084 main.go:141] libmachine: (offline-docker-642000) Calling .Create
	I0805 16:52:58.481400    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:58.481533    6084 main.go:141] libmachine: (offline-docker-642000) DBG | I0805 16:52:58.481388    6282 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:52:58.481575    6084 main.go:141] libmachine: (offline-docker-642000) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:52:58.663729    6084 main.go:141] libmachine: (offline-docker-642000) DBG | I0805 16:52:58.663627    6282 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/id_rsa...
	I0805 16:52:58.817047    6084 main.go:141] libmachine: (offline-docker-642000) DBG | I0805 16:52:58.816973    6282 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/offline-docker-642000.rawdisk...
	I0805 16:52:58.817058    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Writing magic tar header
	I0805 16:52:58.817066    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Writing SSH key tar header
	I0805 16:52:58.817636    6084 main.go:141] libmachine: (offline-docker-642000) DBG | I0805 16:52:58.817587    6282 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000 ...
	I0805 16:52:59.190676    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:59.190698    6084 main.go:141] libmachine: (offline-docker-642000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/hyperkit.pid
	I0805 16:52:59.190776    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Using UUID 3be4b88a-f728-4b37-b5ca-98af1624f607
	I0805 16:52:59.215934    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Generated MAC 7a:ec:7e:68:e:7f
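A fresh UUID and MAC are generated for the second attempt; note the MAC is logged with unpadded hex bytes (7a:ec:7e:68:e:7f). The driver's actual MAC comes out of hyperkit/vmnet, keyed on the UUID above, which is not reproduced here; the sketch below only shows how a random locally-administered unicast MAC in that same unpadded style could be produced:

    package main

    import (
    	"crypto/rand"
    	"fmt"
    )

    // randomMAC returns a random locally-administered, unicast MAC, printed
    // with unpadded hex bytes to match the log's style (e.g. 7a:ec:7e:68:e:7f).
    // Illustrative only: the driver's real MAC comes from vmnet via the VM UUID.
    func randomMAC() (string, error) {
    	b := make([]byte, 6)
    	if _, err := rand.Read(b); err != nil {
    		return "", err
    	}
    	b[0] = (b[0] | 0x02) &^ 0x01 // set the local bit, clear the multicast bit
    	return fmt.Sprintf("%x:%x:%x:%x:%x:%x", b[0], b[1], b[2], b[3], b[4], b[5]), nil
    }

    func main() {
    	mac, err := randomMAC()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("Generated MAC", mac)
    }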
	I0805 16:52:59.215950    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-642000
	I0805 16:52:59.215992    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:59 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3be4b88a-f728-4b37-b5ca-98af1624f607", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00059a1b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:52:59.216020    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:59 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3be4b88a-f728-4b37-b5ca-98af1624f607", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00059a1b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:52:59.216115    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:59 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3be4b88a-f728-4b37-b5ca-98af1624f607", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/offline-docker-642000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-642000"}
	I0805 16:52:59.216169    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:59 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3be4b88a-f728-4b37-b5ca-98af1624f607 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/offline-docker-642000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=offline-docker-642000"
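For readers unfamiliar with hyperkit's bhyve-style options, the invocation above can be reconstructed and annotated as follows. All flags and values are copied from the log; the glosses in the comments follow hyperkit's documented usage and are explanatory, not part of the driver:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// st abbreviates the machine state directory from the log.
    	st := "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000"
    	args := []string{
    		"-A",          // generate ACPI tables for the guest
    		"-u",          // keep the RTC in UTC
    		"-F", st + "/hyperkit.pid", // write the hyperkit pid file
    		"-c", "2",     // vCPUs
    		"-m", "2048M", // guest memory
    		"-s", "0:0,hostbridge", // PCI slot 0: host bridge
    		"-s", "31,lpc",         // PCI slot 31: LPC bus (backs com1 below)
    		"-s", "1:0,virtio-net", // PCI slot 1: vmnet-backed NIC
    		"-U", "3be4b88a-f728-4b37-b5ca-98af1624f607", // VM UUID (keys the vmnet identity)
    		"-s", "2:0,virtio-blk," + st + "/offline-docker-642000.rawdisk", // root disk
    		"-s", "3,ahci-cd," + st + "/boot2docker.iso",                    // boot ISO
    		"-s", "4,virtio-rnd",                                            // entropy device
    		"-l", "com1,autopty=" + st + "/tty,log=" + st + "/console-ring", // serial console
    		// "-f", "kexec,<bzimage>,<initrd>,<cmdline>" boots the kernel directly
    	}
    	fmt.Println("/usr/local/bin/hyperkit", strings.Join(args, " "))
    }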
	I0805 16:52:59.216179    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:59 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:52:59.219320    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:59 DEBUG: hyperkit: Pid is 6283
	I0805 16:52:59.219800    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 0
	I0805 16:52:59.219815    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:59.219894    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6283
	I0805 16:52:59.221174    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for 7a:ec:7e:68:e:7f in /var/db/dhcpd_leases ...
	I0805 16:52:59.221251    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:52:59.221266    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:52:59.221281    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:52:59.221287    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:52:59.221295    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:52:59.221302    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:52:59.221321    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:52:59.221336    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:52:59.221355    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:52:59.221382    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:52:59.221407    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:52:59.221422    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:52:59.221434    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:52:59.221449    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:52:59.221459    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:52:59.221469    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:52:59.221485    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:52:59.221501    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
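The attempt above is the driver's lease scan: it reads /var/db/dhcpd_leases and looks for an entry whose hardware address matches the new VM's MAC (7a:ec:7e:68:e:7f); none of the 17 existing entries match, so it retries. Below is a minimal Go sketch of that matching step, assuming the usual macOS dhcpd_leases block format (name=, ip_address= and hw_address=1,<mac> lines inside braces); the function and variable names are illustrative, not minikube's actual identifiers. Note that the leases use unpadded octets (for example "a6:1c:88:9c:44:3" above), as does the searched-for MAC ("e" rather than "0e"), so the comparison normalizes each octet first.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// normalizeMAC strips leading zeros from each octet so that
// "7a:ec:7e:68:0e:7f" and "7a:ec:7e:68:e:7f" compare equal,
// matching the unpadded form dhcpd writes (e.g. "a6:1c:88:9c:44:3").
func normalizeMAC(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		trimmed := strings.TrimLeft(p, "0")
		if trimmed == "" {
			trimmed = "0"
		}
		parts[i] = trimmed
	}
	return strings.Join(parts, ":")
}

// findIPByMAC scans a dhcpd_leases file for an hw_address entry matching
// mac and returns the ip_address recorded in the same lease block.
func findIPByMAC(path, mac string) (string, bool) {
	f, err := os.Open(path)
	if err != nil {
		return "", false
	}
	defer f.Close()

	want := normalizeMAC(mac)
	ip := ""
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// format: hw_address=1,1a:fc:f3:eb:cb:4b
			if i := strings.Index(line, ","); i >= 0 {
				if normalizeMAC(line[i+1:]) == want && ip != "" {
					return ip, true
				}
			}
		}
	}
	return "", false
}

func main() {
	if ip, ok := findIPByMAC("/var/db/dhcpd_leases", "7a:ec:7e:68:e:7f"); ok {
		fmt.Println("lease found:", ip)
	} else {
		fmt.Println("no lease yet") // the state seen in every attempt above
	}
}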
	I0805 16:52:59.226849    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:59 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:52:59.235060    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:59 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/offline-docker-642000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:52:59.236000    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:52:59.236019    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:52:59.236030    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:52:59.236040    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:52:59.612817    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:52:59.612832    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:52:59.727561    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:52:59.727581    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:52:59.727596    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:52:59.727607    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:52:59.728465    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:52:59.728477    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:52:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:53:01.221652    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 1
	I0805 16:53:01.221668    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:53:01.221747    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6283
	I0805 16:53:01.222539    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for 7a:ec:7e:68:e:7f in /var/db/dhcpd_leases ...
	I0805 16:53:01.222601    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(... same 17 dhcpd_leases entries as in Attempt 0; still no entry for 7a:ec:7e:68:e:7f ...)
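The Lease field in each entry appears to be the lease-expiry time as a hexadecimal Unix timestamp; a throwaway sketch to decode one of the values above (an inference from the log, not driver code):

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Lease:0x66b2b679 from the entries above
	sec, err := strconv.ParseInt("66b2b679", 16, 64)
	if err != nil {
		panic(err)
	}
	fmt.Println(time.Unix(sec, 0).UTC()) // 2024-08-06 23:49:13 +0000 UTC
}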
	I0805 16:53:03.223541    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 2
	I0805 16:53:03.223558    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:53:03.223625    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6283
	I0805 16:53:03.224529    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for 7a:ec:7e:68:e:7f in /var/db/dhcpd_leases ...
	I0805 16:53:03.224596    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(... same 17 dhcpd_leases entries as in Attempt 0; still no entry for 7a:ec:7e:68:e:7f ...)
	I0805 16:53:05.142400    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:53:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0805 16:53:05.142583    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:53:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0805 16:53:05.142593    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:53:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0805 16:53:05.162487    6084 main.go:141] libmachine: (offline-docker-642000) DBG | 2024/08/05 16:53:05 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
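The numbered attempts are iterations of a fixed-interval poll; the roughly 2-second spacing between the Attempt timestamps suggests a loop of this shape. The sketch reuses findIPByMAC from the example above, and the attempt limit and interval are inferred from the log rather than taken from the driver source.

// Builds on findIPByMAC from the earlier sketch; add "fmt", "log" and
// "time" to its imports.
// waitForIP polls the lease file until the MAC shows up or attempts run out.
func waitForIP(leasePath, mac string, attempts int, interval time.Duration) (string, error) {
	for i := 0; i < attempts; i++ {
		log.Printf("Attempt %d", i)
		if ip, ok := findIPByMAC(leasePath, mac); ok {
			return ip, nil
		}
		time.Sleep(interval)
	}
	return "", fmt.Errorf("no DHCP lease for %s after %d attempts", mac, attempts)
}

// e.g. waitForIP("/var/db/dhcpd_leases", "7a:ec:7e:68:e:7f", 60, 2*time.Second)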
	I0805 16:53:05.225758    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 3
	I0805 16:53:05.225785    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:53:05.225981    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6283
	I0805 16:53:05.227577    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for 7a:ec:7e:68:e:7f in /var/db/dhcpd_leases ...
	I0805 16:53:05.227751    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(... same 17 dhcpd_leases entries as in Attempt 0; still no entry for 7a:ec:7e:68:e:7f ...)
	I0805 16:53:07.229416    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 4
	I0805 16:53:07.229432    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:53:07.229535    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6283
	I0805 16:53:07.230327    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for 7a:ec:7e:68:e:7f in /var/db/dhcpd_leases ...
	I0805 16:53:07.230400    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(... same 17 dhcpd_leases entries as in Attempt 0; still no entry for 7a:ec:7e:68:e:7f ...)
	I0805 16:53:09.232608    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 5
	I0805 16:53:09.232623    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:53:09.232683    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6283
	I0805 16:53:09.233506    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for 7a:ec:7e:68:e:7f in /var/db/dhcpd_leases ...
	I0805 16:53:09.233565    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(... same 17 dhcpd_leases entries as in Attempt 0; still no entry for 7a:ec:7e:68:e:7f ...)
	I0805 16:53:11.235778    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 6
	I0805 16:53:11.235793    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:53:11.235861    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6283
	I0805 16:53:11.236647    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for 7a:ec:7e:68:e:7f in /var/db/dhcpd_leases ...
	I0805 16:53:11.236690    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(... same 17 dhcpd_leases entries as in Attempt 0; still no entry for 7a:ec:7e:68:e:7f ...)
	I0805 16:53:13.237685    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 7
	I0805 16:53:13.237697    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:53:13.237773    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6283
	I0805 16:53:13.238538    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for 7a:ec:7e:68:e:7f in /var/db/dhcpd_leases ...
	I0805 16:53:13.238568    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(... same 17 dhcpd_leases entries as in Attempt 0; still no entry for 7a:ec:7e:68:e:7f ...)
	I0805 16:53:15.240166    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 8
	I0805 16:53:15.240179    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:53:15.240258    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6283
	I0805 16:53:15.241222    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for 7a:ec:7e:68:e:7f in /var/db/dhcpd_leases ...
	I0805 16:53:15.241261    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(... same 17 dhcpd_leases entries as in Attempt 0; still no entry for 7a:ec:7e:68:e:7f ...)
	I0805 16:53:17.241458    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 9
	I0805 16:53:17.241472    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:53:17.241573    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6283
	I0805 16:53:17.242388    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for 7a:ec:7e:68:e:7f in /var/db/dhcpd_leases ...
	I0805 16:53:17.242425    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(... same 17 dhcpd_leases entries as in Attempt 0; still no entry for 7a:ec:7e:68:e:7f ...)
	I0805 16:53:19.243921    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 10
	I0805 16:53:19.243934    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:53:19.244019    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6283
	I0805 16:53:19.244811    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for 7a:ec:7e:68:e:7f in /var/db/dhcpd_leases ...
	I0805 16:53:19.244849    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(... same 17 dhcpd_leases entries as in Attempt 0; still no entry for 7a:ec:7e:68:e:7f ...)
	I0805 16:53:21.245977    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 11
	I0805 16:53:21.245991    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:53:21.246115    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6283
	I0805 16:53:21.246886    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for 7a:ec:7e:68:e:7f in /var/db/dhcpd_leases ...
	I0805 16:53:21.246930    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(... same 17 dhcpd_leases entries as in Attempt 0; still no entry for 7a:ec:7e:68:e:7f ...)
	I0805 16:53:23.247328    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 12
	I0805 16:53:23.247345    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:53:23.247426    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6283
	I0805 16:53:23.248211    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for 7a:ec:7e:68:e:7f in /var/db/dhcpd_leases ...
	I0805 16:53:23.248259    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	(... same 17 dhcpd_leases entries as in Attempt 0; still no entry for 7a:ec:7e:68:e:7f ...)
	I0805 16:53:25.249537    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 13
	I0805 16:53:25.249553    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:53:25.249599    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6283
	I0805 16:53:25.250451    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for 7a:ec:7e:68:e:7f in /var/db/dhcpd_leases ...
	I0805 16:53:25.250482    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:53:25.250493    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:53:25.250503    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:53:25.250512    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:53:25.250526    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:53:25.250533    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:53:25.250541    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:53:25.250548    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:53:25.250557    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:53:25.250565    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:53:25.250571    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:53:25.250577    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:53:25.250592    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:53:25.250605    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:53:25.250614    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:53:25.250621    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:53:25.250628    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:53:25.250637    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
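What the loop above is doing, mechanically: after booting the VM, the hyperkit driver polls the host's DHCP lease database every couple of seconds, looking for a lease whose hardware address matches the MAC it assigned to the new VM (7a:ec:7e:68:e:7f here); each "Attempt N" is one full re-read of /var/db/dhcpd_leases. Below is a minimal, self-contained Go sketch of that poll-and-match pattern. It is not the driver's actual implementation: the helper names are invented for illustration, and the on-disk layout assumed here (brace-delimited records with hw_address=1,<mac> lines, hex octets printed without zero padding) is the commonly documented macOS bootpd format, which this report does not itself confirm.

// leasescan.go - illustrative sketch only, not minikube's code.
// Assumes /var/db/dhcpd_leases entries carry lines like "hw_address=1,4e:2c:40:42:c9:36".
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// macFromLeaseLine extracts the MAC from a "hw_address=1,aa:bb:..." line, if present.
func macFromLeaseLine(line string) (string, bool) {
	line = strings.TrimSpace(line)
	if !strings.HasPrefix(line, "hw_address=") {
		return "", false
	}
	v := strings.TrimPrefix(line, "hw_address=")
	// Drop the leading "1," hardware-type prefix if present.
	if i := strings.IndexByte(v, ','); i >= 0 {
		v = v[i+1:]
	}
	return strings.ToLower(v), true
}

// leaseFileHasMAC performs one scan of the lease file for the target MAC.
func leaseFileHasMAC(path, target string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if mac, ok := macFromLeaseLine(sc.Text()); ok && mac == strings.ToLower(target) {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	const target = "7a:ec:7e:68:e:7f" // the MAC the driver is waiting for, as printed above
	for attempt := 1; attempt <= 30; attempt++ {
		found, err := leaseFileHasMAC("/var/db/dhcpd_leases", target)
		if err != nil {
			fmt.Fprintln(os.Stderr, "scan failed:", err)
		} else if found {
			fmt.Println("lease found on attempt", attempt)
			return
		}
		time.Sleep(2 * time.Second) // matches the ~2s cadence between attempts in the log
	}
	fmt.Println("gave up: MAC never appeared in the lease file")
}

Note the unpadded octet in the target (e, not 0e): the sketch sidesteps padding by comparing the file's own string representation after lower-casing; a robust matcher would normalize each octet on both sides before comparing.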
	[attempts 14 through 27 repeat the scan above verbatim: every ~2 seconds the driver re-reads /var/db/dhcpd_leases, finds the same 17 minikube leases (192.169.0.2 through 192.169.0.18), and never finds the target MAC 7a:ec:7e:68:e:7f]
	I0805 16:53:55.283010    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 28
	I0805 16:53:55.283026    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:53:55.283066    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6283
	I0805 16:53:55.283980    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for 7a:ec:7e:68:e:7f in /var/db/dhcpd_leases ...
	I0805 16:53:55.284003    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:53:55.284015    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:53:55.284027    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:53:55.284043    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:53:55.284054    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:53:55.284065    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:53:55.284077    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:53:55.284098    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:53:55.284107    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:53:55.284126    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:53:55.284138    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:53:55.284149    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:53:55.284161    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:53:55.284174    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:53:55.284184    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:53:55.284191    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:53:55.284198    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:53:55.284215    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:53:57.284820    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Attempt 29
	I0805 16:53:57.284833    6084 main.go:141] libmachine: (offline-docker-642000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:53:57.284929    6084 main.go:141] libmachine: (offline-docker-642000) DBG | hyperkit pid from json: 6283
	I0805 16:53:57.285683    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Searching for 7a:ec:7e:68:e:7f in /var/db/dhcpd_leases ...
	I0805 16:53:57.285740    6084 main.go:141] libmachine: (offline-docker-642000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:53:57.285755    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:53:57.285767    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:53:57.285777    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:53:57.285786    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:53:57.285792    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:53:57.285798    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:53:57.285809    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:53:57.285816    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:53:57.285825    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:53:57.285833    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:53:57.285839    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:53:57.285848    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:53:57.285855    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:53:57.285862    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:53:57.285868    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:53:57.285876    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:53:57.285893    6084 main.go:141] libmachine: (offline-docker-642000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:53:59.286999    6084 client.go:171] duration metric: took 1m0.874377982s to LocalClient.Create
	I0805 16:54:01.287979    6084 start.go:128] duration metric: took 1m2.906494879s to createHost
	I0805 16:54:01.287995    6084 start.go:83] releasing machines lock for "offline-docker-642000", held for 1m2.906592966s
	W0805 16:54:01.288125    6084 out.go:239] * Failed to start hyperkit VM. Running "minikube delete -p offline-docker-642000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7a:ec:7e:68:e:7f
	* Failed to start hyperkit VM. Running "minikube delete -p offline-docker-642000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7a:ec:7e:68:e:7f
	I0805 16:54:01.349309    6084 out.go:177] 
	W0805 16:54:01.370272    6084 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7a:ec:7e:68:e:7f
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7a:ec:7e:68:e:7f
	W0805 16:54:01.370282    6084 out.go:239] * 
	* 
	W0805 16:54:01.370952    6084 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:54:01.432278    6084 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-amd64 start -p offline-docker-642000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-08-05 16:54:01.537265 -0700 PDT m=+4021.109897064
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-642000 -n offline-docker-642000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p offline-docker-642000 -n offline-docker-642000: exit status 7 (80.672248ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0805 16:54:01.616032    6302 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0805 16:54:01.616052    6302 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-642000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "offline-docker-642000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-642000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-642000: (5.270257908s)
--- FAIL: TestOffline (195.33s)
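
Every numbered attempt in the trace above is the same operation: the hyperkit driver re-reads /var/db/dhcpd_leases roughly every two seconds, looking for the MAC address it generated for the new VM (7a:ec:7e:68:e:7f) among the 17 leases left by other minikube runs, until LocalClient.Create gives up after a minute and start exits with GUEST_PROVISION. Below is a minimal Go sketch of that lookup; it assumes macOS's lease-block format (ip_address=/hw_address= lines) and is illustrative only, not the driver's actual parser.

// lease_lookup.go - a sketch of the lookup that kept failing above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPForMAC scans a dhcpd_leases-style file for hwAddr and returns the
// IP of the matching lease. The error branch corresponds to the
// "could not find an IP address for <MAC>" failure in the log.
func findIPForMAC(path, hwAddr string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string // most recent ip_address= seen in the current lease block
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// entries are assumed to look like "hw_address=1,7a:ec:7e:68:e:7f"
			if strings.TrimPrefix(line, "hw_address=1,") == hwAddr {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("could not find an IP address for %s", hwAddr)
}

func main() {
	ip, err := findIPForMAC("/var/db/dhcpd_leases", "7a:ec:7e:68:e:7f")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip)
}

The lookup can only succeed if the guest boots far enough to broadcast a DHCP request, so a VM that hangs early produces exactly the retry pattern logged above.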

TestCertOptions (252.11s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-231000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E0805 17:00:35.334574    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 17:01:03.023500    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 17:01:19.296382    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 17:01:50.699456    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-231000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : exit status 80 (4m6.441962848s)

-- stdout --
	* [cert-options-231000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-options-231000" primary control-plane node in "cert-options-231000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-options-231000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 16:54:b:f0:2e:3b
	* Failed to start hyperkit VM. Running "minikube delete -p cert-options-231000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ba:ec:49:bf:bf:fe
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ba:ec:49:bf:bf:fe
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-231000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-231000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-231000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 50 (158.94983ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-231000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-231000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 50
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-231000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-231000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-231000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 50 (160.482271ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-231000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-231000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 50
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node cert-options-231000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-08-05 17:03:29.542965 -0700 PDT m=+4589.070449404
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-231000 -n cert-options-231000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-231000 -n cert-options-231000: exit status 7 (78.019925ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0805 17:03:29.619352    6884 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0805 17:03:29.619376    6884 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-231000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-options-231000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-231000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-231000: (5.230582967s)
--- FAIL: TestCertOptions (252.11s)
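
The SAN assertions at cert_options_test.go:69 never had a certificate to inspect, since the VM never acquired an IP. For reference, the verification the test drives through "openssl x509 -text -noout" reduces to reading the certificate's SAN lists; the Go sketch below does the same with crypto/x509. The local file name is hypothetical, and the expected IPs and names are taken from the start flags above; this is not the test's own implementation.

// san_check.go - a sketch of the SAN check performed by this test.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	// Hypothetical local copy of /var/lib/minikube/certs/apiserver.crt.
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM data in apiserver.crt")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Values requested via --apiserver-ips and --apiserver-names.
	for _, want := range []string{"127.0.0.1", "192.168.15.15"} {
		found := false
		for _, ip := range cert.IPAddresses {
			if ip.Equal(net.ParseIP(want)) {
				found = true
				break
			}
		}
		fmt.Printf("SAN includes %s: %v\n", want, found)
	}
	for _, want := range []string{"localhost", "www.google.com"} {
		found := false
		for _, name := range cert.DNSNames {
			if name == want {
				found = true
				break
			}
		}
		fmt.Printf("SAN includes %s: %v\n", want, found)
	}
}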

TestCertExpiration (1739.23s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-698000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-698000 --memory=2048 --cert-expiration=3m --driver=hyperkit : exit status 80 (4m7.042154519s)

-- stdout --
	* [cert-expiration-698000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-expiration-698000" primary control-plane node in "cert-expiration-698000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "cert-expiration-698000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 76:85:9:c1:60:b8
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-698000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 12:86:4e:8d:50:bc
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 12:86:4e:8d:50:bc
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-expiration-698000 --memory=2048 --cert-expiration=3m --driver=hyperkit " : exit status 80
E0805 17:03:13.755982    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-698000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E0805 17:05:35.335158    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-698000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : exit status 80 (21m46.84468766s)

-- stdout --
	* [cert-expiration-698000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-698000" primary control-plane node in "cert-expiration-698000" cluster
	* Updating the running hyperkit "cert-expiration-698000" VM ...
	* Updating the running hyperkit "cert-expiration-698000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-698000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-698000 --memory=2048 --cert-expiration=8760h --driver=hyperkit " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-698000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-698000" primary control-plane node in "cert-expiration-698000" cluster
	* Updating the running hyperkit "cert-expiration-698000" VM ...
	* Updating the running hyperkit "cert-expiration-698000" VM ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* Failed to start hyperkit VM. Running "minikube delete -p cert-expiration-698000" may fix it: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: error getting ip during provisioning: IP address is not set
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-08-05 17:27:13.170754 -0700 PDT m=+6012.609417013
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-698000 -n cert-expiration-698000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-698000 -n cert-expiration-698000: exit status 7 (78.233312ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0805 17:27:13.246965    8268 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0805 17:27:13.246991    8268 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-698000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "cert-expiration-698000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-698000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-698000: (5.264082378s)
--- FAIL: TestCertExpiration (1739.23s)
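
The second start was expected to warn that certificates minted with --cert-expiration=3m had already lapsed, but provisioning failed before any certificate was read, so the assertion at cert_options_test.go:136 fired instead. The underlying check is just a comparison of a certificate's NotAfter field against the clock, as in this sketch (hypothetical file name; not minikube's code):

// cert_expiry.go - a sketch of the expiry check behind --cert-expiration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical local copy of a cluster certificate.
	data, err := os.ReadFile("client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM data in client.crt")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// A cert minted with --cert-expiration=3m trips the expired branch
	// three minutes after creation, the window this test waits out.
	if left := time.Until(cert.NotAfter); left <= 0 {
		fmt.Printf("certificate expired %s ago\n", -left)
	} else {
		fmt.Printf("certificate valid for another %s\n", left)
	}
}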

TestDockerFlags (252.67s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-878000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E0805 16:55:35.289279    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 16:55:35.295709    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 16:55:35.305834    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 16:55:35.326707    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 16:55:35.367971    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 16:55:35.448111    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 16:55:35.608428    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 16:55:35.930542    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 16:55:36.572667    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 16:55:37.852878    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 16:55:40.415008    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 16:55:45.535221    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 16:55:55.776489    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 16:56:02.306933    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:56:16.256998    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 16:56:19.252117    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:56:50.655449    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 16:56:57.218008    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p docker-flags-878000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.935175657s)

-- stdout --
	* [docker-flags-878000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "docker-flags-878000" primary control-plane node in "docker-flags-878000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "docker-flags-878000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0805 16:55:10.081578    6352 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:55:10.082174    6352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:55:10.082423    6352 out.go:304] Setting ErrFile to fd 2...
	I0805 16:55:10.082434    6352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:55:10.082806    6352 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:55:10.084344    6352 out.go:298] Setting JSON to false
	I0805 16:55:10.107080    6352 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5081,"bootTime":1722897029,"procs":444,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:55:10.107177    6352 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:55:10.130379    6352 out.go:177] * [docker-flags-878000] minikube v1.33.1 on Darwin 14.5
	I0805 16:55:10.171443    6352 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:55:10.171476    6352 notify.go:220] Checking for updates...
	I0805 16:55:10.213231    6352 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:55:10.234379    6352 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:55:10.255356    6352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:55:10.276246    6352 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:55:10.296338    6352 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:55:10.317757    6352 config.go:182] Loaded profile config "force-systemd-flag-556000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:55:10.317858    6352 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:55:10.346203    6352 out.go:177] * Using the hyperkit driver based on user configuration
	I0805 16:55:10.386296    6352 start.go:297] selected driver: hyperkit
	I0805 16:55:10.386311    6352 start.go:901] validating driver "hyperkit" against <nil>
	I0805 16:55:10.386322    6352 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:55:10.389402    6352 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:55:10.389533    6352 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 16:55:10.397979    6352 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 16:55:10.401908    6352 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:55:10.401929    6352 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 16:55:10.401964    6352 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:55:10.402155    6352 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0805 16:55:10.402213    6352 cni.go:84] Creating CNI manager for ""
	I0805 16:55:10.402229    6352 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:55:10.402235    6352 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 16:55:10.402310    6352 start.go:340] cluster config:
	{Name:docker-flags-878000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:55:10.402394    6352 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:55:10.443339    6352 out.go:177] * Starting "docker-flags-878000" primary control-plane node in "docker-flags-878000" cluster
	I0805 16:55:10.464445    6352 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:55:10.464480    6352 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 16:55:10.464493    6352 cache.go:56] Caching tarball of preloaded images
	I0805 16:55:10.464602    6352 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:55:10.464612    6352 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:55:10.464691    6352 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/docker-flags-878000/config.json ...
	I0805 16:55:10.464707    6352 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/docker-flags-878000/config.json: {Name:mk475b16fa4df398a5ff9683b370eecd874faf0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:55:10.465022    6352 start.go:360] acquireMachinesLock for docker-flags-878000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:56:07.523508    6352 start.go:364] duration metric: took 57.058269091s to acquireMachinesLock for "docker-flags-878000"
	I0805 16:56:07.523550    6352 start.go:93] Provisioning new machine with config: &{Name:docker-flags-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:56:07.523617    6352 start.go:125] createHost starting for "" (driver="hyperkit")
	I0805 16:56:07.544968    6352 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 16:56:07.545096    6352 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:56:07.545128    6352 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:56:07.553769    6352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53711
	I0805 16:56:07.554138    6352 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:56:07.554556    6352 main.go:141] libmachine: Using API Version  1
	I0805 16:56:07.554568    6352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:56:07.554799    6352 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:56:07.554920    6352 main.go:141] libmachine: (docker-flags-878000) Calling .GetMachineName
	I0805 16:56:07.555010    6352 main.go:141] libmachine: (docker-flags-878000) Calling .DriverName
	I0805 16:56:07.555120    6352 start.go:159] libmachine.API.Create for "docker-flags-878000" (driver="hyperkit")
	I0805 16:56:07.555143    6352 client.go:168] LocalClient.Create starting
	I0805 16:56:07.555185    6352 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:56:07.555236    6352 main.go:141] libmachine: Decoding PEM data...
	I0805 16:56:07.555253    6352 main.go:141] libmachine: Parsing certificate...
	I0805 16:56:07.555317    6352 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:56:07.555355    6352 main.go:141] libmachine: Decoding PEM data...
	I0805 16:56:07.555367    6352 main.go:141] libmachine: Parsing certificate...
	I0805 16:56:07.555380    6352 main.go:141] libmachine: Running pre-create checks...
	I0805 16:56:07.555387    6352 main.go:141] libmachine: (docker-flags-878000) Calling .PreCreateCheck
	I0805 16:56:07.555474    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:07.555663    6352 main.go:141] libmachine: (docker-flags-878000) Calling .GetConfigRaw
	I0805 16:56:07.588727    6352 main.go:141] libmachine: Creating machine...
	I0805 16:56:07.588737    6352 main.go:141] libmachine: (docker-flags-878000) Calling .Create
	I0805 16:56:07.588823    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:07.588944    6352 main.go:141] libmachine: (docker-flags-878000) DBG | I0805 16:56:07.588818    6375 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:56:07.588997    6352 main.go:141] libmachine: (docker-flags-878000) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:56:07.793486    6352 main.go:141] libmachine: (docker-flags-878000) DBG | I0805 16:56:07.793424    6375 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/id_rsa...
	I0805 16:56:07.925194    6352 main.go:141] libmachine: (docker-flags-878000) DBG | I0805 16:56:07.925074    6375 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/docker-flags-878000.rawdisk...
	I0805 16:56:07.925212    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Writing magic tar header
	I0805 16:56:07.925229    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Writing SSH key tar header
	I0805 16:56:07.925789    6352 main.go:141] libmachine: (docker-flags-878000) DBG | I0805 16:56:07.925748    6375 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000 ...
	I0805 16:56:08.299294    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:08.299321    6352 main.go:141] libmachine: (docker-flags-878000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/hyperkit.pid
	I0805 16:56:08.299331    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Using UUID fb95835a-b26d-40ca-904e-be3f5b65f888
	I0805 16:56:08.325246    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Generated MAC 7a:9b:8b:86:47:f3
	I0805 16:56:08.325270    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-878000
	I0805 16:56:08.325330    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:08 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fb95835a-b26d-40ca-904e-be3f5b65f888", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:56:08.325366    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:08 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fb95835a-b26d-40ca-904e-be3f5b65f888", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", pro
cess:(*os.Process)(nil)}
	I0805 16:56:08.325432    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:08 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "fb95835a-b26d-40ca-904e-be3f5b65f888", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/docker-flags-878000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/bzimage,/Users/jenkins/m
inikube-integration/19373-1122/.minikube/machines/docker-flags-878000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-878000"}
	I0805 16:56:08.325480    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:08 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U fb95835a-b26d-40ca-904e-be3f5b65f888 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/docker-flags-878000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags
-878000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-878000"
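The Arguments/CmdLine lines above make the flag layout explicit: -F the pid file, -c/-m CPU count and memory, one -s slot per virtual PCI device (hostbridge, lpc, virtio-net, the virtio-blk raw disk, the ahci-cd ISO, virtio-rnd), -U the VM UUID (from which the guest MAC reported as "Generated MAC" above appears to be derived), -l the com1 autopty console, and -f kexec for direct kernel boot with the appended kernel command line. A small Go sketch that assembles an equivalent argv; buildHyperkitArgv is a hypothetical helper written from the log, not the driver's code.

package main

import (
	"fmt"
	"strings"
)

// buildHyperkitArgv mirrors the flag layout visible in the log above:
// one -s slot per virtual PCI device and a kexec boot triple at the end.
func buildHyperkitArgv(stateDir, uuid, cmdline string, cpus, memMB int) []string {
	return []string{
		"/usr/local/bin/hyperkit",
		"-A", "-u",
		"-F", stateDir + "/hyperkit.pid", // pid file checked on restart
		"-c", fmt.Sprint(cpus),
		"-m", fmt.Sprintf("%dM", memMB),
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net", // vmnet NIC; its MAC follows from -U
		"-U", uuid,
		"-s", "2:0,virtio-blk," + stateDir + "/docker-flags-878000.rawdisk",
		"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
		"-s", "4,virtio-rnd",
		"-l", "com1,autopty=" + stateDir + "/tty,log=" + stateDir + "/console-ring",
		"-f", "kexec," + stateDir + "/bzimage," + stateDir + "/initrd," + cmdline,
	}
}

func main() {
	argv := buildHyperkitArgv(
		"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000",
		"fb95835a-b26d-40ca-904e-be3f5b65f888",
		"earlyprintk=serial loglevel=3 console=ttyS0",
		2, 2048,
	)
	fmt.Println(strings.Join(argv, " "))
}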
	I0805 16:56:08.325488    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:08 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:56:08.328379    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:08 DEBUG: hyperkit: Pid is 6376
	I0805 16:56:08.328842    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 0
	I0805 16:56:08.328859    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:08.328951    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:08.330080    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:08.330172    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:08.330194    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:08.330214    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:08.330227    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:08.330237    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:08.330247    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:08.330256    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:08.330283    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:08.330303    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:08.330313    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:08.330324    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:08.330330    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:08.330339    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:08.330348    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:08.330355    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:08.330363    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:08.330371    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:08.330380    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
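Attempt 0 above establishes the pattern repeated for the rest of this log: the driver polls /var/db/dhcpd_leases (the macOS vmnet DHCP lease database) roughly every two seconds for the MAC it generated (7a:9b:8b:86:47:f3); the 17 entries found belong to earlier minikube VMs, so the search keeps retrying. A hedged Go sketch of that polling loop follows; ipForMAC is a hypothetical helper, and the real driver presumably also normalizes octets printed without leading zeros (compare HWAddress:a6:1c:88:9c:44:3 above), which this sketch skips.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// ipForMAC scans the vmnet DHCP lease database for a hw_address entry
// matching mac and returns the associated ip_address. The file is a
// sequence of {...} blocks of key=value lines, as echoed in the log.
func ipForMAC(leasesPath, mac string) (string, bool) {
	f, err := os.Open(leasesPath)
	if err != nil {
		return "", false
	}
	defer f.Close()
	var ip string
	var hit bool
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{":
			ip, hit = "", false
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// value looks like "1,7a:9b:8b:86:47:f3"
			hit = strings.HasSuffix(line, ","+mac)
		case line == "}" && hit && ip != "":
			return ip, true
		}
	}
	return "", false
}

func main() {
	mac := "7a:9b:8b:86:47:f3" // MAC generated for docker-flags-878000
	for attempt := 0; attempt < 60; attempt++ {
		if ip, ok := ipForMAC("/var/db/dhcpd_leases", mac); ok {
			fmt.Println("got IP:", ip)
			return
		}
		time.Sleep(2 * time.Second) // matches the ~2s cadence in the log
	}
	fmt.Println("timed out waiting for DHCP lease")
}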
	I0805 16:56:08.336268    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:08 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:56:08.344662    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:08 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:56:08.345673    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:56:08.345700    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:56:08.345712    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:56:08.345722    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:56:08.721622    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:08 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:56:08.721641    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:08 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:56:08.836336    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:56:08.836355    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:56:08.836368    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:56:08.836380    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:56:08.837237    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:08 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:56:08.837262    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:08 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:56:10.331408    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 1
	I0805 16:56:10.331426    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:10.331504    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:10.332304    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:10.332352    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:10.332365    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:10.332377    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:10.332384    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:10.332391    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:10.332398    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:10.332404    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:10.332411    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:10.332417    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:10.332485    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:10.332513    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:10.332525    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:10.332536    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:10.332550    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:10.332561    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:10.332568    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:10.332574    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:10.332581    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:12.333655    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 2
	I0805 16:56:12.333673    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:12.333780    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:12.334544    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:12.334595    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:12.334606    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:12.334614    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:12.334622    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:12.334629    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:12.334638    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:12.334645    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:12.334651    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:12.334661    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:12.334667    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:12.334678    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:12.334687    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:12.334694    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:12.334701    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:12.334715    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:12.334728    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:12.334743    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:12.334758    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:14.261005    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:14 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0805 16:56:14.261156    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:14 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0805 16:56:14.261168    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:14 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0805 16:56:14.281477    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:56:14 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0805 16:56:14.335021    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 3
	I0805 16:56:14.335049    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:14.335215    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:14.336642    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:14.336745    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:14.336768    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:14.336802    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:14.336830    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:14.336863    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:14.336883    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:14.336900    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:14.336928    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:14.336945    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:14.336979    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:14.336989    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:14.337000    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:14.337025    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:14.337049    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:14.337072    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:14.337090    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:14.337101    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:14.337112    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:16.337352    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 4
	I0805 16:56:16.337365    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:16.337476    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:16.338272    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:16.338302    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:16.338312    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:16.338328    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:16.338339    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:16.338350    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:16.338358    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:16.338373    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:16.338390    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:16.338401    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:16.338410    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:16.338419    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:16.338427    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:16.338435    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:16.338441    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:16.338462    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:16.338476    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:16.338485    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:16.338493    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:18.340480    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 5
	I0805 16:56:18.340495    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:18.340561    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:18.341336    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:18.341376    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:18.341394    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:18.341408    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:18.341418    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:18.341427    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:18.341433    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:18.341439    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:18.341453    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:18.341490    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:18.341501    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:18.341510    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:18.341517    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:18.341523    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:18.341529    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:18.341536    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:18.341546    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:18.341563    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:18.341577    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:20.341847    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 6
	I0805 16:56:20.341869    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:20.341954    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:20.342823    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:20.342872    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:20.342884    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:20.342896    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:20.342905    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:20.342914    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:20.342920    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:20.342936    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:20.342950    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:20.342960    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:20.342969    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:20.342976    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:20.342988    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:20.342995    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:20.343003    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:20.343020    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:20.343034    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:20.343048    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:20.343067    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:22.343234    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 7
	I0805 16:56:22.343249    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:22.343386    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:22.344160    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:22.344205    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:22.344216    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:22.344230    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:22.344239    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:22.344255    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:22.344269    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:22.344286    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:22.344298    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:22.344307    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:22.344328    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:22.344366    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:22.344374    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:22.344380    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:22.344389    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:22.344405    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:22.344417    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:22.344437    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:22.344445    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:24.346517    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 8
	I0805 16:56:24.346529    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:24.346539    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:24.347365    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:24.347423    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:24.347434    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:24.347441    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:24.347450    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:24.347458    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:24.347464    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:24.347484    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:24.347498    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:24.347506    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:24.347513    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:24.347532    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:24.347539    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:24.347550    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:24.347558    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:24.347565    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:24.347573    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:24.347580    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:24.347596    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:26.349445    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 9
	I0805 16:56:26.349459    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:26.349564    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:26.350330    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:26.350375    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:26.350386    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:26.350406    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:26.350414    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:26.350423    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:26.350435    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:26.350455    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:26.350467    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:26.350475    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:26.350483    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:26.350492    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:26.350499    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:26.350507    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:26.350517    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:26.350529    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:26.350546    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:26.350555    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:26.350566    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:28.350926    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 10
	I0805 16:56:28.350941    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:28.351082    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:28.351883    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:28.351945    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:28.351957    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:28.351965    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:28.351972    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:28.351980    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:28.351986    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:28.351992    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:28.351998    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:28.352004    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:28.352011    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:28.352026    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:28.352038    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:28.352045    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:28.352054    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:28.352071    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:28.352079    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:28.352087    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:28.352096    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:30.354017    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 11
	I0805 16:56:30.354040    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:30.354096    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:30.354907    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:30.354944    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:30.354952    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:30.354969    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:30.354979    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:30.354988    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:30.354994    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:30.355001    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:30.355007    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:30.355017    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:30.355034    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:30.355044    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:30.355052    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:30.355060    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:30.355066    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:30.355074    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:30.355080    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:30.355088    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:30.355115    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:32.356474    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 12
	I0805 16:56:32.356490    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:32.356563    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:32.357528    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:32.357544    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:32.357551    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:32.357558    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:32.357564    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:32.357585    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:32.357598    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:32.357605    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:32.357612    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:32.357620    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:32.357628    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:32.357636    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:32.357646    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:32.357657    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:32.357666    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:32.357674    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:32.357682    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:32.357690    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:32.357710    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:34.359761    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 13
	I0805 16:56:34.359775    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:34.359867    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:34.360680    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:34.360720    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:34.360731    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:34.360740    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:34.360759    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:34.360774    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:34.360782    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:34.360795    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:34.360807    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:34.360815    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:34.360821    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:34.360828    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:34.360852    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:34.360864    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:34.360872    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:34.360878    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:34.360892    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:34.360913    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:34.360925    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:36.361395    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 14
	I0805 16:56:36.361410    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:36.361501    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:36.362480    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:36.362522    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:36.362534    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:36.362543    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:36.362551    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:36.362558    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:36.362566    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:36.362581    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:36.362604    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:36.362620    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:36.362634    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:36.362647    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:36.362657    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:36.362667    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:36.362674    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:36.362681    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:36.362690    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:36.362699    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:36.362706    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:38.364169    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 15
	I0805 16:56:38.364182    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:38.364258    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:38.365069    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:38.365139    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:38.365149    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:38.365156    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:38.365165    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:38.365172    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:38.365178    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:38.365197    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:38.365211    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:38.365230    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:38.365241    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:38.365248    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:38.365257    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:38.365265    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:38.365274    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:38.365281    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:38.365289    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:38.365303    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:38.365318    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:40.365557    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 16
	I0805 16:56:40.365570    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:40.365650    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:40.366446    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:40.366485    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:40.366494    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:40.366511    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:40.366522    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:40.366530    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:40.366560    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:40.366582    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:40.366591    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:40.366609    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:40.366623    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:40.366630    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:40.366638    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:40.366646    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:40.366653    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:40.366659    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:40.366680    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:40.366695    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:40.366727    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
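
Before each pass over the lease file, the driver re-reads the hyperkit pid (6376 here) from its saved machine state and confirms the process is still running; if hyperkit had died, waiting for a lease would be pointless. A common way to perform that liveness probe on Unix is signal 0, sketched below in Go; treat it as an illustration of the technique, not as the driver's actual implementation.

package main

import (
	"fmt"
	"os"
	"syscall"
)

// pidAlive reports whether a process with the given pid exists by sending it
// signal 0, which performs existence and permission checks without actually
// delivering a signal.
func pidAlive(pid int) bool {
	proc, err := os.FindProcess(pid)
	if err != nil {
		return false
	}
	// On Unix, FindProcess always succeeds; Signal is what actually probes.
	return proc.Signal(syscall.Signal(0)) == nil
}

func main() {
	fmt.Println(pidAlive(6376)) // pid reported in the log above
}
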
	I0805 16:56:42.366880    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 17
	I0805 16:56:42.366896    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:42.366979    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:42.367753    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:42.367804    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:42.367817    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:42.367841    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:42.367851    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:42.367862    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:42.367871    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:42.367883    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:42.367893    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:42.367901    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:42.367909    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:42.367916    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:42.367924    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:42.367942    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:42.367948    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:42.367958    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:42.367966    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:42.367981    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:42.367993    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:44.369550    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 18
	I0805 16:56:44.369563    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:44.369665    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:44.370483    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:44.370533    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:44.370549    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:44.370566    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:44.370578    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:44.370594    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:44.370601    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:44.370609    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:44.370616    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:44.370628    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:44.370636    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:44.370644    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:44.370651    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:44.370659    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:44.370665    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:44.370671    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:44.370680    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:44.370688    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:44.370695    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:46.372714    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 19
	I0805 16:56:46.372729    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:46.372815    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:46.373568    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:46.373623    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:46.373638    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:46.373670    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:46.373691    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:46.373699    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:46.373709    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:46.373717    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:46.373726    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:46.373732    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:46.373740    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:46.373761    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:46.373773    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:46.373787    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:46.373796    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:46.373804    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:46.373812    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:46.373819    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:46.373825    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:48.374994    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 20
	I0805 16:56:48.375009    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:48.375118    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:48.375881    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:48.375925    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:48.375935    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:48.375944    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:48.375951    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:48.375966    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:48.375977    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:48.375987    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:48.375994    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:48.376005    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:48.376014    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:48.376022    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:48.376030    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:48.376043    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:48.376071    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:48.376083    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:48.376091    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:48.376099    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:48.376107    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:50.378137    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 21
	I0805 16:56:50.378152    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:50.378274    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:50.379156    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:50.379222    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:50.379232    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:50.379242    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:50.379250    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:50.379258    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:50.379271    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:50.379279    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:50.379286    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:50.379294    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:50.379301    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:50.379317    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:50.379331    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:50.379341    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:50.379354    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:50.379362    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:50.379367    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:50.379374    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:50.379385    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:52.380419    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 22
	I0805 16:56:52.380444    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:52.380538    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:52.381566    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:52.381608    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:52.381623    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:52.381637    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:52.381646    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:52.381668    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:52.381676    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:52.381685    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:52.381692    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:52.381699    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:52.381707    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:52.381716    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:52.381724    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:52.381731    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:52.381738    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:52.381745    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:52.381754    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:52.381761    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:52.381769    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:54.383786    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 23
	I0805 16:56:54.383800    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:54.383894    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:54.384847    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:54.384875    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:54.384883    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:54.384901    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:54.384908    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:54.384915    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:54.384922    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:54.384934    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:54.384949    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:54.384958    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:54.384966    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:54.384974    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:54.384982    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:54.384996    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:54.385010    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:54.385018    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:54.385024    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:54.385031    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:54.385039    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:56.385746    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 24
	I0805 16:56:56.385761    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:56.385892    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:56.386777    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:56.386828    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:56.386839    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:56.386849    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:56.386855    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:56.386877    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:56.386889    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:56.386899    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:56.386908    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:56.386922    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:56.386931    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:56.386938    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:56.386946    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:56.386952    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:56.386960    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:56.386972    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:56.386980    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:56.386987    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:56.386993    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:56:58.389022    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 25
	I0805 16:56:58.389046    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:58.389168    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:56:58.390004    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:56:58.390012    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:56:58.390043    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:56:58.390051    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:56:58.390058    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:56:58.390066    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:56:58.390077    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:56:58.390083    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:56:58.390090    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:56:58.390098    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:56:58.390105    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:56:58.390113    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:56:58.390122    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:56:58.390130    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:56:58.390137    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:56:58.390149    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:56:58.390157    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:56:58.390168    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:56:58.390177    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:00.392220    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 26
	I0805 16:57:00.392236    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:00.392302    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:57:00.393163    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:57:00.393204    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:00.393214    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:00.393239    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:00.393249    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:00.393262    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:00.393276    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:00.393285    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:00.393292    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:00.393298    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:00.393305    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:00.393311    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:00.393318    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:00.393326    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:00.393332    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:00.393338    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:00.393345    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:00.393351    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:00.393358    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:02.393540    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 27
	I0805 16:57:02.393556    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:02.393662    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:57:02.394447    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:57:02.394501    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:02.394515    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:02.394522    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:02.394530    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:02.394573    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:02.394586    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:02.394600    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:02.394613    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:02.394621    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:02.394630    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:02.394637    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:02.394645    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:02.394652    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:02.394660    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:02.394674    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:02.394683    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:02.394692    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:02.394705    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:04.395147    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 28
	I0805 16:57:04.395161    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:04.395254    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:57:04.396009    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:57:04.396082    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:04.396093    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:04.396105    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:04.396117    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:04.396129    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:04.396135    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:04.396142    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:04.396150    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:04.396166    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:04.396177    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:04.396185    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:04.396191    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:04.396207    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:04.396220    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:04.396229    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:04.396237    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:04.396245    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:04.396253    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:06.396972    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 29
	I0805 16:57:06.396990    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:06.397064    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:57:06.397851    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for 7a:9b:8b:86:47:f3 in /var/db/dhcpd_leases ...
	I0805 16:57:06.397882    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:06.397894    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:06.397914    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:06.397921    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:06.397930    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:06.397935    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:06.397944    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:06.397950    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:06.397956    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:06.397964    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:06.397970    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:06.397978    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:06.398001    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:06.398037    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:06.398048    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:06.398061    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:06.398069    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:06.398077    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:08.398932    6352 client.go:171] duration metric: took 1m0.843561574s to LocalClient.Create
	I0805 16:57:10.401044    6352 start.go:128] duration metric: took 1m2.877191469s to createHost
	I0805 16:57:10.401063    6352 start.go:83] releasing machines lock for "docker-flags-878000", held for 1m2.877322051s
	W0805 16:57:10.401085    6352 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7a:9b:8b:86:47:f3
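	The failure above is the driver's lease poll timing out: the hyperkit driver re-reads /var/db/dhcpd_leases roughly every two seconds, looking for an entry whose hardware address matches the MAC vmnet assigned to the new VM (7a:9b:8b:86:47:f3 in this run), and gives up after about 30 attempts even though 17 leases from earlier minikube VMs are present. A minimal Go sketch of that scan loop, assuming the usual macOS lease-file layout where ip_address precedes hw_address inside each entry (an illustration, not the driver's actual code):

```go
package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
	"time"
)

// findIPForMAC scans the macOS DHCP lease file and returns the IP of the
// entry whose hw_address matches mac, or "" if no entry matches yet.
func findIPForMAC(leaseFile, mac string) (string, error) {
	data, err := os.ReadFile(leaseFile)
	if err != nil {
		return "", err
	}
	ipRe := regexp.MustCompile(`ip_address=(\S+)`)
	hwRe := regexp.MustCompile(`hw_address=\d+,(\S+)`)
	var ip string
	for _, line := range strings.Split(string(data), "\n") {
		if m := ipRe.FindStringSubmatch(line); m != nil {
			ip = m[1] // remember the IP of the entry we are currently inside
		}
		if m := hwRe.FindStringSubmatch(line); m != nil && strings.EqualFold(m[1], mac) {
			return ip, nil
		}
	}
	return "", nil
}

func main() {
	mac := "7a:9b:8b:86:47:f3" // the MAC this run was waiting for
	for attempt := 0; attempt < 30; attempt++ { // the log shows ~30 attempts, ~2s apart
		if ip, err := findIPForMAC("/var/db/dhcpd_leases", mac); err == nil && ip != "" {
			fmt.Println("found IP:", ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("could not find an IP address for", mac)
}
```

	Because the guest never obtains a lease (its DHCP request never reaches, or is never answered by, the vmnet DHCP server), no amount of re-reading the file can succeed, which is why the driver then deletes the VM and retries from scratch, as the following lines show.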
	I0805 16:57:10.401396    6352 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:57:10.401414    6352 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:57:10.409901    6352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53713
	I0805 16:57:10.410251    6352 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:57:10.410577    6352 main.go:141] libmachine: Using API Version  1
	I0805 16:57:10.410586    6352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:57:10.410839    6352 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:57:10.411245    6352 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:57:10.411266    6352 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:57:10.419821    6352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53715
	I0805 16:57:10.420194    6352 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:57:10.420518    6352 main.go:141] libmachine: Using API Version  1
	I0805 16:57:10.420529    6352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:57:10.420767    6352 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:57:10.420899    6352 main.go:141] libmachine: (docker-flags-878000) Calling .GetState
	I0805 16:57:10.421009    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:10.421076    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:57:10.422048    6352 main.go:141] libmachine: (docker-flags-878000) Calling .DriverName
	I0805 16:57:10.443187    6352 out.go:177] * Deleting "docker-flags-878000" in hyperkit ...
	I0805 16:57:10.464437    6352 main.go:141] libmachine: (docker-flags-878000) Calling .Remove
	I0805 16:57:10.464602    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:10.464620    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:10.464700    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:57:10.465635    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:10.465683    6352 main.go:141] libmachine: (docker-flags-878000) DBG | waiting for graceful shutdown
	I0805 16:57:11.466008    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:11.466107    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:57:11.467021    6352 main.go:141] libmachine: (docker-flags-878000) DBG | waiting for graceful shutdown
	I0805 16:57:12.467408    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:12.467497    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:57:12.469234    6352 main.go:141] libmachine: (docker-flags-878000) DBG | waiting for graceful shutdown
	I0805 16:57:13.471305    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:13.471375    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:57:13.472112    6352 main.go:141] libmachine: (docker-flags-878000) DBG | waiting for graceful shutdown
	I0805 16:57:14.472896    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:14.472959    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:57:14.473644    6352 main.go:141] libmachine: (docker-flags-878000) DBG | waiting for graceful shutdown
	I0805 16:57:15.475353    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:15.475433    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6376
	I0805 16:57:15.476608    6352 main.go:141] libmachine: (docker-flags-878000) DBG | sending sigkill
	I0805 16:57:15.476617    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:15.486657    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:57:15 WARN : hyperkit: failed to read stderr: EOF
	I0805 16:57:15.486679    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:57:15 WARN : hyperkit: failed to read stdout: EOF
	W0805 16:57:15.504353    6352 out.go:239] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7a:9b:8b:86:47:f3
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7a:9b:8b:86:47:f3
	I0805 16:57:15.504371    6352 start.go:729] Will try again in 5 seconds ...
	I0805 16:57:20.504493    6352 start.go:360] acquireMachinesLock for docker-flags-878000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:58:13.331494    6352 start.go:364] duration metric: took 52.795910863s to acquireMachinesLock for "docker-flags-878000"
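	The ~53 s gap between these two lines is lock contention rather than VM work: parallel tests serialize machine creation behind a machines lock whose retry parameters are visible in the log (Delay:500ms, Timeout:13m0s). A rough Go sketch of that acquire-with-retry shape, using an O_EXCL lock file as a hypothetical stand-in for minikube's actual mutex implementation:

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// tryLock emulates an exclusive lock with an O_EXCL lock file (an assumption
// for illustration; the retry shape, not the mechanism, is the point).
func tryLock(path string) (func(), error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0600)
	if err != nil {
		return nil, err
	}
	f.Close()
	return func() { os.Remove(path) }, nil
}

// acquire retries every delay until timeout, mirroring the logged parameters.
func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		if release, err := tryLock(path); err == nil {
			return release, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring machines lock")
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	release, err := acquire("/tmp/docker-flags-878000.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("acquired after", time.Since(start)) // ~53s in this run
}
```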
	I0805 16:58:13.331537    6352 start.go:93] Provisioning new machine with config: &{Name:docker-flags-878000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:docker-flags-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:58:13.331610    6352 start.go:125] createHost starting for "" (driver="hyperkit")
	I0805 16:58:13.373742    6352 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 16:58:13.373819    6352 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:58:13.373852    6352 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:58:13.383329    6352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53719
	I0805 16:58:13.383733    6352 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:58:13.384074    6352 main.go:141] libmachine: Using API Version  1
	I0805 16:58:13.384087    6352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:58:13.384296    6352 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:58:13.384416    6352 main.go:141] libmachine: (docker-flags-878000) Calling .GetMachineName
	I0805 16:58:13.384499    6352 main.go:141] libmachine: (docker-flags-878000) Calling .DriverName
	I0805 16:58:13.384597    6352 start.go:159] libmachine.API.Create for "docker-flags-878000" (driver="hyperkit")
	I0805 16:58:13.384610    6352 client.go:168] LocalClient.Create starting
	I0805 16:58:13.384640    6352 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:58:13.384695    6352 main.go:141] libmachine: Decoding PEM data...
	I0805 16:58:13.384705    6352 main.go:141] libmachine: Parsing certificate...
	I0805 16:58:13.384746    6352 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:58:13.384784    6352 main.go:141] libmachine: Decoding PEM data...
	I0805 16:58:13.384796    6352 main.go:141] libmachine: Parsing certificate...
	I0805 16:58:13.384808    6352 main.go:141] libmachine: Running pre-create checks...
	I0805 16:58:13.384814    6352 main.go:141] libmachine: (docker-flags-878000) Calling .PreCreateCheck
	I0805 16:58:13.384887    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:58:13.384916    6352 main.go:141] libmachine: (docker-flags-878000) Calling .GetConfigRaw
	I0805 16:58:13.394969    6352 main.go:141] libmachine: Creating machine...
	I0805 16:58:13.394978    6352 main.go:141] libmachine: (docker-flags-878000) Calling .Create
	I0805 16:58:13.395072    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:58:13.395199    6352 main.go:141] libmachine: (docker-flags-878000) DBG | I0805 16:58:13.395066    6418 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:58:13.395266    6352 main.go:141] libmachine: (docker-flags-878000) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:58:13.813682    6352 main.go:141] libmachine: (docker-flags-878000) DBG | I0805 16:58:13.813585    6418 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/id_rsa...
	I0805 16:58:14.369135    6352 main.go:141] libmachine: (docker-flags-878000) DBG | I0805 16:58:14.369054    6418 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/docker-flags-878000.rawdisk...
	I0805 16:58:14.369162    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Writing magic tar header
	I0805 16:58:14.369178    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Writing SSH key tar header
	I0805 16:58:14.369734    6352 main.go:141] libmachine: (docker-flags-878000) DBG | I0805 16:58:14.369697    6418 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000 ...
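	The "magic tar header" / "SSH key tar header" lines describe how the driver seeds the raw disk: the head of the .rawdisk file is written as a tar stream carrying the freshly generated SSH key, which the boot2docker guest unpacks on first boot, and the file is then extended to the full 20000 MB disk size. A hedged Go sketch of that idea (the in-archive file name and exact layout are assumptions, not taken from this log):

```go
package main

import (
	"archive/tar"
	"log"
	"os"
)

// seedRawDisk writes a small tar archive containing the SSH key at the start
// of the raw disk file, then grows the file (sparsely) to the full disk size.
func seedRawDisk(path string, key []byte, sizeBytes int64) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	tw := tar.NewWriter(f)
	// hypothetical in-archive name; the real layout is boot2docker's contract
	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0600, Size: int64(len(key))}
	if err := tw.WriteHeader(hdr); err != nil {
		return err
	}
	if _, err := tw.Write(key); err != nil {
		return err
	}
	if err := tw.Close(); err != nil {
		return err
	}
	// Truncate only extends the file here; the zero tail stays sparse on APFS,
	// so a 20000MB disk costs almost no space until the guest writes to it.
	return f.Truncate(sizeBytes)
}

func main() {
	key, err := os.ReadFile("id_rsa.pub") // key pair generated in the step above
	if err != nil {
		log.Fatal(err)
	}
	if err := seedRawDisk("docker-flags-878000.rawdisk", key, 20000*1024*1024); err != nil {
		log.Fatal(err)
	}
}
```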
	I0805 16:58:14.744747    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:58:14.744768    6352 main.go:141] libmachine: (docker-flags-878000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/hyperkit.pid
	I0805 16:58:14.744782    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Using UUID 5de53fcd-381f-4fe2-8413-0b184c02b12a
	I0805 16:58:14.770604    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Generated MAC c2:80:69:4f:a1:e8
	I0805 16:58:14.770621    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-878000
	I0805 16:58:14.770655    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:14 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5de53fcd-381f-4fe2-8413-0b184c02b12a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:58:14.770682    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:14 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"5de53fcd-381f-4fe2-8413-0b184c02b12a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:58:14.770729    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:14 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "5de53fcd-381f-4fe2-8413-0b184c02b12a", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/docker-flags-878000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-878000"}
	I0805 16:58:14.770772    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:14 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 5de53fcd-381f-4fe2-8413-0b184c02b12a -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/docker-flags-878000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=docker-flags-878000"
	I0805 16:58:14.770785    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:14 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:58:14.773902    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:14 DEBUG: hyperkit: Pid is 6433
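	Decoding the CmdLine above: -c/-m set CPUs and memory, each -s attaches a virtual device to a PCI slot, -U pins the VM UUID (from which vmnet derives the MAC logged earlier), and -f kexec boots the kernel/initrd directly without a bootloader. A shortened Go sketch of the same invocation (paths abbreviated; the flag readings are my interpretation of hyperkit's usage, not asserted by the log):

```go
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("/usr/local/bin/hyperkit",
		"-A", "-u", // generate ACPI tables; keep the RTC in UTC
		"-F", "hyperkit.pid", // pid file the driver later reads back
		"-c", "2", "-m", "2048M", // CPUs / memory from the test flags
		"-s", "0:0,hostbridge", "-s", "31,lpc", // host bridge and LPC bus
		"-s", "1:0,virtio-net", // NIC whose MAC must appear in /var/db/dhcpd_leases
		"-U", "5de53fcd-381f-4fe2-8413-0b184c02b12a", // UUID that determines the MAC
		"-s", "2:0,virtio-blk,docker-flags-878000.rawdisk", // seeded raw disk
		"-s", "3,ahci-cd,boot2docker.iso", // boot ISO
		"-s", "4,virtio-rnd", // entropy source for the guest
		"-f", "kexec,bzimage,initrd,loglevel=3 console=ttyS0", // direct kernel boot
	)
	if err := cmd.Start(); err != nil { // the driver then polls the lease file
		log.Fatal(err)
	}
}
```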
	I0805 16:58:14.774530    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 0
	I0805 16:58:14.774551    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:58:14.774611    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6433
	I0805 16:58:14.775513    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for c2:80:69:4f:a1:e8 in /var/db/dhcpd_leases ...
	I0805 16:58:14.775580    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:58:14.775593    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:58:14.775612    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:58:14.775623    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:58:14.775633    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:58:14.775643    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:58:14.775656    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:58:14.775666    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:58:14.775673    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:58:14.775680    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:58:14.775708    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:58:14.775718    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:58:14.775727    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:58:14.775736    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:58:14.775759    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:58:14.775776    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:58:14.775786    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:58:14.775796    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:58:14.781750    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:14 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:58:14.789798    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:14 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/docker-flags-878000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:58:14.790743    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:58:14.790769    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:58:14.790784    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:58:14.790800    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:58:15.170128    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:58:15.170145    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:58:15.284928    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:58:15.284951    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:58:15.284964    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:58:15.284997    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:58:15.285825    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:58:15.285837    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:58:16.778045    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 1
	I0805 16:58:16.778061    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:58:16.778136    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6433
	I0805 16:58:16.778976    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for c2:80:69:4f:a1:e8 in /var/db/dhcpd_leases ...
	I0805 16:58:16.779006    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:58:16.779016    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:58:16.779025    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:58:16.779032    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:58:16.779043    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:58:16.779053    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:58:16.779060    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:58:16.779128    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:58:16.779163    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:58:16.779195    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:58:16.779210    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:58:16.779218    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:58:16.779226    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:58:16.779236    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:58:16.779243    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:58:16.779251    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:58:16.779260    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:58:16.779268    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:58:18.781091    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 2
	I0805 16:58:18.781108    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:58:18.781174    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6433
	I0805 16:58:18.782143    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for c2:80:69:4f:a1:e8 in /var/db/dhcpd_leases ...
	I0805 16:58:18.782179    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:58:18.782191    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:58:18.782200    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:58:18.782212    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:58:18.782220    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:58:18.782226    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:58:18.782242    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:58:18.782257    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:58:18.782272    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:58:18.782280    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:58:18.782290    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:58:18.782299    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:58:18.782306    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:58:18.782315    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:58:18.782327    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:58:18.782338    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:58:18.782349    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:58:18.782356    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:58:20.708251    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:20 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0805 16:58:20.708529    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:20 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0805 16:58:20.708540    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:20 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0805 16:58:20.734582    6352 main.go:141] libmachine: (docker-flags-878000) DBG | 2024/08/05 16:58:20 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:58:20.785478    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 3
	I0805 16:58:20.785530    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:58:20.785710    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6433
	I0805 16:58:20.787198    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for c2:80:69:4f:a1:e8 in /var/db/dhcpd_leases ...
	I0805 16:58:20.787296    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:58:20.787313    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:58:20.787327    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:58:20.787346    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:58:20.787373    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:58:20.787388    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:58:20.787400    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:58:20.787428    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:58:20.787457    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:58:20.787474    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:58:20.787486    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:58:20.787496    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:58:20.787518    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:58:20.787535    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:58:20.787547    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:58:20.787557    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:58:20.787567    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:58:20.787578    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:58:22.789235    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Attempt 4
	I0805 16:58:22.789250    6352 main.go:141] libmachine: (docker-flags-878000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:58:22.789342    6352 main.go:141] libmachine: (docker-flags-878000) DBG | hyperkit pid from json: 6433
	I0805 16:58:22.790145    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Searching for c2:80:69:4f:a1:e8 in /var/db/dhcpd_leases ...
	I0805 16:58:22.790196    6352 main.go:141] libmachine: (docker-flags-878000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:58:22.790206    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:58:22.790215    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:58:22.790221    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:58:22.790243    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:58:22.790249    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:58:22.790257    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:58:22.790262    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:58:22.790278    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:58:22.790291    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:58:22.790299    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:58:22.790304    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:58:22.790312    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:58:22.790326    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:58:22.790334    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:58:22.790342    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:58:22.790363    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:58:22.790375    6352 main.go:141] libmachine: (docker-flags-878000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	[... attempts 5 through 18 omitted: every 2 seconds the driver re-reads /var/db/dhcpd_leases, finds the same 17 minikube leases (192.169.0.2 through 192.169.0.18), and c2:80:69:4f:a1:e8 never appears among them ...]
	[attempts 19 through 29 omitted: each repeats the identical 17-entry lease scan for c2:80:69:4f:a1:e8 at roughly 2-second intervals (16:58:52 through 16:59:12); the lease table for 192.169.0.2-192.169.0.18 is unchanged throughout and never contains a match]
	I0805 16:59:14.858313    6352 client.go:171] duration metric: took 1m1.462068487s to LocalClient.Create
	I0805 16:59:16.858643    6352 start.go:128] duration metric: took 1m3.515243787s to createHost
	I0805 16:59:16.858656    6352 start.go:83] releasing machines lock for "docker-flags-878000", held for 1m3.515431336s
	W0805 16:59:16.858751    6352 out.go:239] * Failed to start hyperkit VM. Running "minikube delete -p docker-flags-878000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c2:80:69:4f:a1:e8
	I0805 16:59:16.900868    6352 out.go:177] 
	W0805 16:59:16.921815    6352 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for c2:80:69:4f:a1:e8
	W0805 16:59:16.921827    6352 out.go:239] * 
	W0805 16:59:16.922450    6352 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:59:16.984800    6352 out.go:177] 

                                                
                                                
** /stderr **
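Note: the repeated "Searching for c2:80:69:4f:a1:e8 in /var/db/dhcpd_leases" attempts above show the hyperkit driver polling the macOS bootpd lease file for the new VM's MAC address; the failure means a matching lease never appeared. A minimal sketch of that kind of lookup, assuming the key=value block format that the parsed entries above imply (helper names are illustrative, not minikube's actual driver code):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPForMAC scans a bootpd-style lease file (blocks of key=value pairs)
// for an entry whose hw_address ends with the given MAC and returns its
// ip_address. Illustrative only; minikube's real logic lives in its
// hyperkit machine driver.
func findIPForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	ip := ""
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if v, ok := strings.CutPrefix(line, "ip_address="); ok {
			ip = v // ip_address precedes hw_address within a block
		}
		if v, ok := strings.CutPrefix(line, "hw_address="); ok {
			// hw_address is stored as "<hwtype>,<mac>", e.g. "1,c2:80:69:4f:a1:e8"
			if strings.HasSuffix(v, ","+mac) {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease for %s", mac)
}

func main() {
	const mac = "c2:80:69:4f:a1:e8" // the MAC the log above was searching for
	for attempt := 1; attempt <= 30; attempt++ {
		if ip, err := findIPForMAC("/var/db/dhcpd_leases", mac); err == nil {
			fmt.Println("found", ip)
			return
		}
		time.Sleep(2 * time.Second) // the log shows ~2s between attempts
	}
	fmt.Println("IP address never found in dhcp leases file")
}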
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-amd64 start -p docker-flags-878000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-878000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-878000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 50 (177.192932ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-878000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-878000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 50
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"\n\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-878000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p docker-flags-878000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 50 (170.108145ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node docker-flags-878000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-amd64 -p docker-flags-878000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 50
docker_test.go:73: expected "out/minikube-darwin-amd64 -p docker-flags-878000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "\n\n"
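For context, docker_test.go is asserting that flags given at start time reach the Docker daemon inside the VM: --docker-env=FOO=BAR and --docker-env=BAZ=BAT should surface in the unit's Environment property, and --docker-opt=debug as --debug in ExecStart. A simplified sketch of that check (hypothetical test name; the real assertions live in docker_test.go):

package docker_test

import (
	"os/exec"
	"strings"
	"testing"
)

// TestDockerFlagsSketch is a trimmed illustration of the checks above:
// query the docker systemd unit over `minikube ssh` and look for the
// values passed via --docker-env and --docker-opt.
func TestDockerFlagsSketch(t *testing.T) {
	show := func(prop string) string {
		out, err := exec.Command("out/minikube-darwin-amd64", "-p", "docker-flags-878000",
			"ssh", "sudo systemctl show docker --property="+prop+" --no-pager").CombinedOutput()
		if err != nil {
			t.Fatalf("systemctl show %s failed: %v\n%s", prop, err, out)
		}
		return string(out)
	}

	env := show("Environment")
	for _, kv := range []string{"FOO=BAR", "BAZ=BAT"} {
		if !strings.Contains(env, kv) {
			t.Errorf("expected %q in docker Environment, got %q", kv, env)
		}
	}
	if execStart := show("ExecStart"); !strings.Contains(execStart, "--debug") {
		t.Errorf("expected --debug in docker ExecStart, got %q", execStart)
	}
}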
panic.go:626: *** TestDockerFlags FAILED at 2024-08-05 16:59:17.419303 -0700 PDT m=+4336.948451096
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-878000 -n docker-flags-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p docker-flags-878000 -n docker-flags-878000: exit status 7 (79.706458ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 16:59:17.496951    6472 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0805 16:59:17.496977    6472 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-878000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "docker-flags-878000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-878000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-878000: (5.239825679s)
--- FAIL: TestDockerFlags (252.67s)

                                                
                                    
TestForceSystemdFlag (252.26s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-556000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-flag-556000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (4m6.602664221s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-556000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-flag-556000" primary control-plane node in "force-systemd-flag-556000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-flag-556000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:54:06.940412    6313 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:54:06.940601    6313 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:54:06.940607    6313 out.go:304] Setting ErrFile to fd 2...
	I0805 16:54:06.940610    6313 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:54:06.940797    6313 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:54:06.942262    6313 out.go:298] Setting JSON to false
	I0805 16:54:06.964936    6313 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5017,"bootTime":1722897029,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:54:06.965025    6313 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:54:06.987120    6313 out.go:177] * [force-systemd-flag-556000] minikube v1.33.1 on Darwin 14.5
	I0805 16:54:07.027936    6313 notify.go:220] Checking for updates...
	I0805 16:54:07.049054    6313 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:54:07.070076    6313 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:54:07.090867    6313 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:54:07.111048    6313 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:54:07.132043    6313 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:54:07.152836    6313 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:54:07.173564    6313 config.go:182] Loaded profile config "force-systemd-env-870000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:54:07.173656    6313 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:54:07.201984    6313 out.go:177] * Using the hyperkit driver based on user configuration
	I0805 16:54:07.242009    6313 start.go:297] selected driver: hyperkit
	I0805 16:54:07.242022    6313 start.go:901] validating driver "hyperkit" against <nil>
	I0805 16:54:07.242033    6313 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:54:07.245051    6313 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:54:07.245163    6313 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 16:54:07.253535    6313 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 16:54:07.257370    6313 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:54:07.257400    6313 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
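
The install.go lines above validate the driver binary before use by asking it for its version and comparing it with the minikube build. A minimal Go sketch of that check, assuming the binary reports its version when invoked with a "version" argument (an inference from this log, not a documented CLI contract):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // driverVersion shells out to the driver binary and returns its
    // reported version string. The "version" argument is an assumption
    // based on the install.go output above.
    func driverVersion(bin string) (string, error) {
    	out, err := exec.Command(bin, "version").CombinedOutput()
    	if err != nil {
    		return "", fmt.Errorf("running %s version: %w", bin, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	v, err := driverVersion("docker-machine-driver-hyperkit")
    	fmt.Println(v, err)
    }
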
	I0805 16:54:07.257436    6313 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:54:07.257623    6313 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 16:54:07.257646    6313 cni.go:84] Creating CNI manager for ""
	I0805 16:54:07.257664    6313 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:54:07.257672    6313 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
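
The cni.go lines above record a version gate: on a VM driver with the docker runtime, Kubernetes v1.24+ ships without dockershim networking, so minikube recommends the bridge CNI. A minimal Go sketch of that decision with hypothetical helper names; the real logic lives in minikube's cni package:

    package main

    import (
    	"fmt"

    	"golang.org/x/mod/semver"
    )

    // chooseCNI mirrors the logged decision: VM driver + docker runtime
    // on Kubernetes v1.24 or newer means a CNI (bridge by default) must
    // be configured explicitly. Illustrative only.
    func chooseCNI(driver, runtime, k8sVersion string) string {
    	isVM := driver == "hyperkit" || driver == "kvm2" || driver == "virtualbox"
    	if isVM && runtime == "docker" && semver.Compare(k8sVersion, "v1.24.0") >= 0 {
    		return "bridge"
    	}
    	return "" // leave unset; the runtime default applies
    }

    func main() {
    	fmt.Println(chooseCNI("hyperkit", "docker", "v1.30.3")) // bridge
    }
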
	I0805 16:54:07.257747    6313 start.go:340] cluster config:
	{Name:force-systemd-flag-556000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-556000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:54:07.257849    6313 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:54:07.299776    6313 out.go:177] * Starting "force-systemd-flag-556000" primary control-plane node in "force-systemd-flag-556000" cluster
	I0805 16:54:07.319980    6313 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:54:07.320017    6313 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 16:54:07.320029    6313 cache.go:56] Caching tarball of preloaded images
	I0805 16:54:07.320142    6313 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:54:07.320151    6313 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:54:07.320233    6313 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/force-systemd-flag-556000/config.json ...
	I0805 16:54:07.320250    6313 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/force-systemd-flag-556000/config.json: {Name:mk2a76d9276cc63d3411aca6f8e3a878d46932d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:54:07.320546    6313 start.go:360] acquireMachinesLock for force-systemd-flag-556000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:55:04.246043    6313 start.go:364] duration metric: took 56.925281716s to acquireMachinesLock for "force-systemd-flag-556000"
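
Note the gap: acquireMachinesLock was requested at 16:54:07 and satisfied at 16:55:04, so this profile waited roughly 57s behind the concurrently running force-systemd-env-870000 profile (loaded in config.go above). A minimal sketch of a polling file lock with the same Delay:500ms/Timeout:13m0s shape as the logged spec; the lock path is hypothetical, and minikube's real implementation uses a proper mutex library rather than this scheme:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // acquireFileLock polls for an exclusive lock file, mirroring the
    // Delay/Timeout fields in the logged lock spec. Illustrative only.
    func acquireFileLock(path string, delay, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			return f.Close() // lock held; remove the file to release it
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out acquiring %s: %w", path, err)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	err := acquireFileLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
    	fmt.Println(err)
    }
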
	I0805 16:55:04.246089    6313 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-556000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-556000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:55:04.246144    6313 start.go:125] createHost starting for "" (driver="hyperkit")
	I0805 16:55:04.268788    6313 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 16:55:04.268912    6313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:55:04.268946    6313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:55:04.277450    6313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53691
	I0805 16:55:04.277798    6313 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:55:04.278248    6313 main.go:141] libmachine: Using API Version  1
	I0805 16:55:04.278261    6313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:55:04.278484    6313 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:55:04.278604    6313 main.go:141] libmachine: (force-systemd-flag-556000) Calling .GetMachineName
	I0805 16:55:04.278709    6313 main.go:141] libmachine: (force-systemd-flag-556000) Calling .DriverName
	I0805 16:55:04.278813    6313 start.go:159] libmachine.API.Create for "force-systemd-flag-556000" (driver="hyperkit")
	I0805 16:55:04.278834    6313 client.go:168] LocalClient.Create starting
	I0805 16:55:04.278865    6313 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:55:04.278917    6313 main.go:141] libmachine: Decoding PEM data...
	I0805 16:55:04.278932    6313 main.go:141] libmachine: Parsing certificate...
	I0805 16:55:04.278999    6313 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:55:04.279051    6313 main.go:141] libmachine: Decoding PEM data...
	I0805 16:55:04.279062    6313 main.go:141] libmachine: Parsing certificate...
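
The Reading/Decoding/Parsing triplets above load the CA and client certificate material that will later be installed into the VM. A minimal Go sketch of the same read-decode-parse sequence using only the standard library:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // parseCert mirrors the logged steps: read the PEM file, decode the
    // first PEM block, then parse it as an X.509 certificate.
    func parseCert(path string) (*x509.Certificate, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return nil, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return nil, fmt.Errorf("no PEM block in %s", path)
    	}
    	return x509.ParseCertificate(block.Bytes)
    }

    func main() {
    	cert, err := parseCert("ca.pem") // hypothetical local path
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println(cert.Subject)
    }
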
	I0805 16:55:04.279075    6313 main.go:141] libmachine: Running pre-create checks...
	I0805 16:55:04.279085    6313 main.go:141] libmachine: (force-systemd-flag-556000) Calling .PreCreateCheck
	I0805 16:55:04.279210    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:04.279378    6313 main.go:141] libmachine: (force-systemd-flag-556000) Calling .GetConfigRaw
	I0805 16:55:04.310613    6313 main.go:141] libmachine: Creating machine...
	I0805 16:55:04.310622    6313 main.go:141] libmachine: (force-systemd-flag-556000) Calling .Create
	I0805 16:55:04.310726    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:04.310892    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | I0805 16:55:04.310726    6337 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:55:04.310955    6313 main.go:141] libmachine: (force-systemd-flag-556000) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:55:04.731336    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | I0805 16:55:04.731238    6337 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/id_rsa...
	I0805 16:55:05.039850    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | I0805 16:55:05.039720    6337 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/force-systemd-flag-556000.rawdisk...
	I0805 16:55:05.039873    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Writing magic tar header
	I0805 16:55:05.039884    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Writing SSH key tar header
	I0805 16:55:05.040391    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | I0805 16:55:05.040350    6337 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000 ...
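
The disk-creation DBG lines above describe how the driver seeds the guest: it generates an SSH key, writes a small tar stream (the "magic" and SSH-key tar headers) at the front of an otherwise sparse raw image, and the guest unpacks that on first boot. A minimal sketch of that layout; the path, the in-archive file name, and the sizing are illustrative assumptions, not the driver's exact format:

    package main

    import (
    	"archive/tar"
    	"os"
    )

    // writeSeededRawDisk writes a tiny tar archive holding the SSH key
    // at the start of the file, then truncates out to the full disk size
    // so the image stays sparse on disk.
    func writeSeededRawDisk(path string, key []byte, sizeBytes int64) error {
    	f, err := os.Create(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	tw := tar.NewWriter(f)
    	hdr := &tar.Header{
    		Name:     ".ssh/authorized_keys", // hypothetical in-archive path
    		Mode:     0o600,
    		Size:     int64(len(key)),
    		Typeflag: tar.TypeReg,
    	}
    	if err := tw.WriteHeader(hdr); err != nil {
    		return err
    	}
    	if _, err := tw.Write(key); err != nil {
    		return err
    	}
    	if err := tw.Close(); err != nil {
    		return err
    	}
    	return f.Truncate(sizeBytes) // 20000MB in this run
    }

    func main() {
    	_ = writeSeededRawDisk("demo.rawdisk", []byte("ssh-rsa AAAA... demo"), 20000*1024*1024)
    }
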
	I0805 16:55:05.415296    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:05.415316    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/hyperkit.pid
	I0805 16:55:05.415330    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Using UUID 7f00fba6-7274-453b-939a-2e44bb28053f
	I0805 16:55:05.445504    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Generated MAC 9e:3d:c2:d1:3:e
	I0805 16:55:05.445528    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-556000
	I0805 16:55:05.445568    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:05 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f00fba6-7274-453b-939a-2e44bb28053f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000198630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:55:05.445600    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:05 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"7f00fba6-7274-453b-939a-2e44bb28053f", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000198630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:55:05.445680    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:05 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "7f00fba6-7274-453b-939a-2e44bb28053f", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/force-systemd-flag-556000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-556000"}
	I0805 16:55:05.445739    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:05 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 7f00fba6-7274-453b-939a-2e44bb28053f -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/force-systemd-flag-556000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-556000"
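
Reading the argv dump above: -s assigns PCI slots (0:0 hostbridge, 31 lpc backing the com1 console, 1:0 virtio-net, 2:0 virtio-blk for the raw disk, 3 ahci-cd for the boot ISO, 4 virtio-rnd for entropy), -U pins the VM UUID from which vmnet derives the stable MAC logged earlier, and -f kexec boots the extracted bzimage/initrd directly. A minimal Go sketch assembling the same shape of command, with a hypothetical state directory:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // hyperkitCmd rebuilds the device layout from the argv dump above;
    // stateDir and uuid are parameters, everything else matches the log.
    func hyperkitCmd(stateDir, uuid string) *exec.Cmd {
    	args := []string{
    		"-A", "-u", "-F", stateDir + "/hyperkit.pid",
    		"-c", "2", "-m", "2048M",
    		"-s", "0:0,hostbridge", // PCI host bridge in slot 0
    		"-s", "31,lpc", // LPC bus backing the com1 serial console
    		"-s", "1:0,virtio-net", // NIC; vmnet derives its MAC from -U
    		"-U", uuid,
    		"-s", "2:0,virtio-blk," + stateDir + "/disk.rawdisk",
    		"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
    		"-s", "4,virtio-rnd", // entropy device
    		"-l", "com1,autopty=" + stateDir + "/tty",
    	}
    	return exec.Command("hyperkit", args...)
    }

    func main() {
    	fmt.Println(hyperkitCmd("/tmp/vm", "7f00fba6-7274-453b-939a-2e44bb28053f").Args)
    }
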
	I0805 16:55:05.445756    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:05 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:55:05.448631    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:05 DEBUG: hyperkit: Pid is 6351
	I0805 16:55:05.449030    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 0
	I0805 16:55:05.449046    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:05.449131    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:05.450064    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:05.450150    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:05.450173    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:05.450185    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:05.450196    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:05.450206    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:05.450219    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:05.450228    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:05.450256    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:05.450278    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:05.450291    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:05.450306    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:05.450319    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:05.450335    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:05.450347    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:05.450362    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:05.450376    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:05.450392    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:05.450417    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
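
Each retry attempt that follows rescans macOS's /var/db/dhcpd_leases for the vmnet-generated MAC, because that file is where the VM's DHCP-assigned IP eventually appears. A minimal Go sketch of such a scan, assuming a name=/ip_address=/hw_address= block layout (with ip_address preceding hw_address) matching the fields in the parsed entries above:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // ipForMAC scans a dhcpd_leases-style file for an hw_address entry
    // matching mac and returns the ip_address recorded in the same block.
    func ipForMAC(path, mac string) (string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()

    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address="):
    			// stored as "1,9e:3d:c2:d1:3:e"; the leading "1," is a type tag
    			if strings.HasSuffix(line, ","+mac) {
    				return ip, nil
    			}
    		}
    	}
    	if err := sc.Err(); err != nil {
    		return "", err
    	}
    	return "", fmt.Errorf("mac %s not found in %s", mac, path)
    }

    func main() {
    	ip, err := ipForMAC("/var/db/dhcpd_leases", "9e:3d:c2:d1:3:e")
    	fmt.Println(ip, err)
    }
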
	I0805 16:55:05.456404    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:05 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:55:05.558336    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:05 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:55:05.559266    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:55:05.559283    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:55:05.559319    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:55:05.559341    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:05 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:55:05.936015    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:05 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:55:05.936032    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:05 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:55:06.050729    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:55:06.050752    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:55:06.050779    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:55:06.050796    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:06 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:55:06.051634    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:06 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:55:06.051644    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:06 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:55:07.451176    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 1
	I0805 16:55:07.451191    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:07.451255    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:07.452059    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:07.452131    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:07.452146    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:07.452155    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:07.452165    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:07.452171    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:07.452179    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:07.452186    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:07.452195    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:07.452205    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:07.452213    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:07.452228    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:07.452238    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:07.452252    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:07.452266    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:07.452273    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:07.452282    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:07.452298    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:07.452310    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:09.452469    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 2
	I0805 16:55:09.452485    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:09.452576    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:09.453449    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:09.453474    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:09.453518    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:09.453543    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:09.453558    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:09.453564    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:09.453571    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:09.453578    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:09.453588    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:09.453600    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:09.453617    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:09.453637    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:09.453650    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:09.453660    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:09.453674    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:09.453683    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:09.453691    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:09.453698    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:09.453708    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:11.423419    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:11 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0805 16:55:11.423552    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:11 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0805 16:55:11.423565    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:11 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0805 16:55:11.447670    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:55:11 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0805 16:55:11.454352    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 3
	I0805 16:55:11.454363    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:11.454440    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:11.455267    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:11.455325    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:11.455333    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:11.455354    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:11.455362    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:11.455370    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:11.455383    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:11.455399    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:11.455415    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:11.455423    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:11.455431    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:11.455437    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:11.455460    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:11.455474    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:11.455496    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:11.455503    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:11.455510    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:11.455518    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:11.455537    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:13.456744    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 4
	I0805 16:55:13.456765    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:13.456845    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:13.457650    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:13.457717    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:13.457735    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:13.457750    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:13.457777    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:13.457785    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:13.457794    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:13.457800    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:13.457809    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:13.457817    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:13.457835    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:13.457858    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:13.457867    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:13.457874    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:13.457882    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:13.457891    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:13.457898    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:13.457904    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:13.457912    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:15.459101    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 5
	I0805 16:55:15.459116    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:15.459179    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:15.459998    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:15.460049    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:15.460057    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:15.460065    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:15.460072    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:15.460081    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:15.460091    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:15.460100    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:15.460109    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:15.460117    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:15.460127    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:15.460135    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:15.460163    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:15.460177    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:15.460185    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:15.460193    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:15.460204    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:15.460213    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:15.460223    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:17.462229    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 6
	I0805 16:55:17.462244    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:17.462372    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:17.463394    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:17.463443    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:17.463453    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:17.463466    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:17.463476    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:17.463486    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:17.463495    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:17.463502    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:17.463508    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:17.463522    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:17.463536    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:17.463549    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:17.463558    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:17.463565    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:17.463573    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:17.463579    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:17.463587    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:17.463594    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:17.463602    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:19.465618    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 7
	I0805 16:55:19.465631    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:19.465686    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:19.466500    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:19.466508    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:19.466517    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:19.466527    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:19.466545    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:19.466555    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:19.466575    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:19.466583    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:19.466593    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:19.466602    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:19.466611    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:19.466617    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:19.466623    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:19.466637    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:19.466651    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:19.466660    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:19.466665    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:19.466683    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:19.466696    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:21.466677    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 8
	I0805 16:55:21.466695    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:21.466762    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:21.467566    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:21.467609    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:21.467623    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:21.467636    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:21.467655    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:21.467665    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:21.467673    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:21.467685    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:21.467693    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:21.467701    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:21.467710    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:21.467717    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:21.467724    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:21.467732    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:21.467745    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:21.467757    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:21.467773    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:21.467781    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:21.467791    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:23.468334    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 9
	I0805 16:55:23.468349    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:23.468417    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:23.469210    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:23.469268    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:23.469281    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:23.469297    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:23.469307    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:23.469323    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:23.469331    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:23.469337    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:23.469346    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:23.469353    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:23.469361    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:23.469367    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:23.469374    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:23.469389    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:23.469400    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:23.469417    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:23.469431    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:23.469441    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:23.469448    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:25.471502    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 10
	I0805 16:55:25.471515    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:25.471569    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:25.472462    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:25.472503    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:25.472513    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:25.472530    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:25.472541    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:25.472549    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:25.472556    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:25.472563    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:25.472569    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:25.472577    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:25.472594    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:25.472608    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:25.472616    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:25.472631    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:25.472653    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:25.472666    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:25.472674    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:25.472682    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:25.472691    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:27.473559    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 11
	I0805 16:55:27.473585    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:27.473726    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:27.474599    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:27.474635    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:27.474643    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:27.474653    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:27.474660    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:27.474679    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:27.474691    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:27.474699    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:27.474706    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:27.474714    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:27.474723    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:27.474732    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:27.474739    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:27.474745    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:27.474752    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:27.474767    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:27.474777    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:27.474784    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:27.474792    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:29.476778    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 12
	I0805 16:55:29.476791    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:29.476927    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:29.477714    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:29.477749    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:29.477761    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:29.477770    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:29.477780    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:29.477794    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:29.477808    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:29.477825    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:29.477834    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:29.477841    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:29.477849    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:29.477856    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:29.477864    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:29.477880    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:29.477891    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:29.477913    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:29.477946    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:29.477954    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:29.477961    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:31.479931    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 13
	I0805 16:55:31.479945    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:31.479997    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:31.480779    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:31.480822    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:31.480833    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:31.480852    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:31.480864    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:31.480873    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:31.480883    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:31.480891    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:31.480899    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:31.480906    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:31.480916    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:31.480923    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:31.480934    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:31.480942    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:31.480951    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:31.480958    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:31.480966    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:31.480975    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:31.480984    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:33.481016    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 14
	I0805 16:55:33.481030    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:33.481162    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:33.481952    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:33.482003    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:33.482013    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:33.482022    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:33.482027    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:33.482041    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:33.482056    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:33.482064    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:33.482072    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:33.482090    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:33.482104    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:33.482112    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:33.482120    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:33.482130    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:33.482138    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:33.482145    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:33.482153    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:33.482168    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:33.482180    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:35.482781    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 15
	I0805 16:55:35.482794    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:35.482858    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:35.483602    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:35.483654    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:35.483662    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:35.483675    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:35.483689    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:35.483697    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:35.483711    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:35.483721    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:35.483735    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:35.483742    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:35.483750    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:35.483763    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:35.483772    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:35.483779    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:35.483787    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:35.483794    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:35.483802    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:35.483817    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:35.483824    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:37.485337    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 16
	I0805 16:55:37.485351    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:37.485502    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:37.486284    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:37.486336    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:37.486348    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:37.486357    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:37.486364    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:37.486370    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:37.486378    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:37.486387    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:37.486394    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:37.486412    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:37.486425    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:37.486433    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:37.486442    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:37.486457    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:37.486469    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:37.486480    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:37.486490    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:37.486497    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:37.486506    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:39.487356    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 17
	I0805 16:55:39.487369    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:39.487378    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:39.488204    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:39.488248    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:39.488259    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:39.488269    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:39.488278    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:39.488286    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:39.488292    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:39.488299    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:39.488330    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:39.488350    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:39.488363    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:39.488371    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:39.488380    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:39.488412    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:39.488431    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:39.488443    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:39.488451    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:39.488460    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:39.488469    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:41.490434    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 18
	I0805 16:55:41.490449    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:41.490511    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:41.491353    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:41.491394    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:41.491407    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:41.491416    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:41.491424    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:41.491431    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:41.491438    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:41.491449    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:41.491472    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:41.491482    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:41.491494    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:41.491503    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:41.491511    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:41.491519    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:41.491528    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:41.491536    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:41.491557    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:41.491567    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:41.491584    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:43.492108    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 19
	I0805 16:55:43.492124    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:43.492180    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:43.493035    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:43.493075    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:43.493094    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:43.493103    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:43.493120    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:43.493130    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:43.493149    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:43.493160    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:43.493173    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:43.493183    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:43.493191    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:43.493197    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:43.493210    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:43.493223    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:43.493247    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:43.493259    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:43.493267    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:43.493273    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:43.493286    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:45.495225    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 20
	I0805 16:55:45.495239    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:45.495346    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:45.496225    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:45.496285    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:45.496295    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:45.496306    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:45.496312    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:45.496321    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:45.496330    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:45.496340    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:45.496350    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:45.496358    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:45.496366    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:45.496372    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:45.496384    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:45.496391    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:45.496399    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:45.496406    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:45.496412    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:45.496424    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:45.496438    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:47.497178    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 21
	I0805 16:55:47.497194    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:47.497247    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:55:47.498049    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 9e:3d:c2:d1:3:e in /var/db/dhcpd_leases ...
	I0805 16:55:47.498057    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:47.498067    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:47.498072    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:47.498079    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:47.498088    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:47.498095    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:47.498103    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:47.498116    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:47.498125    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:47.498133    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:47.498139    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:47.498146    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:47.498154    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:47.498161    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:47.498173    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:47.498189    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:47.498201    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:47.498212    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	[Attempts 22 through 29 omitted: every two seconds from 16:55:49 to 16:56:03 the driver re-reads /var/db/dhcpd_leases, finds the same 17 entries listed above, and never finds an entry for 9e:3d:c2:d1:3:e.]
	I0805 16:56:05.521237    6313 client.go:171] duration metric: took 1m1.242175254s to LocalClient.Create
	I0805 16:56:07.523423    6313 start.go:128] duration metric: took 1m3.277029252s to createHost
	I0805 16:56:07.523459    6313 start.go:83] releasing machines lock for "force-systemd-flag-556000", held for 1m3.277180936s
	W0805 16:56:07.523476    6313 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9e:3d:c2:d1:3:e
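The thirty attempts above all perform the same lookup: scan the macOS vmnet DHCP lease file for the VM's generated MAC address and return the IP it was leased. The following is a minimal Go sketch of that lookup, not the hyperkit driver's actual code; leaseIPForMAC is a hypothetical helper, and it assumes the conventional dhcpd_leases block format (braces enclosing name=, ip_address=, and hw_address=1,<mac> lines, with ip_address preceding hw_address).

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // leaseIPForMAC scans the lease file for a hw_address entry matching
    // mac and returns the ip_address recorded in the same lease block.
    func leaseIPForMAC(leasePath, mac string) (string, bool, error) {
    	f, err := os.Open(leasePath)
    	if err != nil {
    		return "", false, err
    	}
    	defer f.Close()

    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case line == "{": // a new lease block starts: reset state
    			ip = ""
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address="):
    			hw := strings.TrimPrefix(line, "hw_address=")
    			if i := strings.Index(hw, ","); i >= 0 {
    				hw = hw[i+1:] // drop the "1," hardware-type prefix
    			}
    			if strings.EqualFold(hw, mac) && ip != "" {
    				return ip, true, nil
    			}
    		}
    	}
    	return "", false, sc.Err()
    }

    func main() {
    	ip, found, err := leaseIPForMAC("/var/db/dhcpd_leases", "9e:3d:c2:d1:3:e")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println(ip, found) // in this run: "" false, hence the error above
    }

In this run the VM never obtained a lease, so every poll returned not-found and the creation timed out with the error logged above.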
	I0805 16:56:07.523803    6313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:56:07.523830    6313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:56:07.532547    6313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53707
	I0805 16:56:07.532971    6313 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:56:07.533340    6313 main.go:141] libmachine: Using API Version  1
	I0805 16:56:07.533362    6313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:56:07.533621    6313 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:56:07.533988    6313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:56:07.534007    6313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:56:07.542967    6313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53709
	I0805 16:56:07.543298    6313 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:56:07.543825    6313 main.go:141] libmachine: Using API Version  1
	I0805 16:56:07.543877    6313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:56:07.544106    6313 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:56:07.544239    6313 main.go:141] libmachine: (force-systemd-flag-556000) Calling .GetState
	I0805 16:56:07.544326    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:07.544404    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:56:07.545418    6313 main.go:141] libmachine: (force-systemd-flag-556000) Calling .DriverName
	I0805 16:56:07.566713    6313 out.go:177] * Deleting "force-systemd-flag-556000" in hyperkit ...
	I0805 16:56:07.608745    6313 main.go:141] libmachine: (force-systemd-flag-556000) Calling .Remove
	I0805 16:56:07.608878    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:07.608887    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:07.608960    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:56:07.609903    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:07.609960    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | waiting for graceful shutdown
	I0805 16:56:08.612089    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:08.612236    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:56:08.613155    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | waiting for graceful shutdown
	I0805 16:56:09.613373    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:09.613470    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:56:09.615161    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | waiting for graceful shutdown
	I0805 16:56:10.615274    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:10.615374    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:56:10.615981    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | waiting for graceful shutdown
	I0805 16:56:11.616852    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:11.616929    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:56:11.617714    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | waiting for graceful shutdown
	I0805 16:56:12.618440    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:56:12.618484    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6351
	I0805 16:56:12.619562    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | sending sigkill
	I0805 16:56:12.619572    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	W0805 16:56:12.629823    6313 out.go:239] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9e:3d:c2:d1:3:e
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 9e:3d:c2:d1:3:e
	I0805 16:56:12.629841    6313 start.go:729] Will try again in 5 seconds ...
	I0805 16:56:12.640294    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:56:12 WARN : hyperkit: failed to read stderr: EOF
	I0805 16:56:12.640314    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:56:12 WARN : hyperkit: failed to read stdout: EOF
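A missing lease is surfaced as a temporary error, so the client deletes the half-created VM (graceful-shutdown waits, then SIGKILL, as logged above) and retries the whole host creation once after the five-second delay. A minimal sketch of that control flow follows; createHost and deleteHost are hypothetical stand-ins for the real libmachine calls, stubbed here so the sketch compiles.

    package main

    import (
    	"errors"
    	"log"
    	"time"
    )

    // Hypothetical stand-ins for the real libmachine calls.
    func createHost(name string) error {
    	return errors.New("creating host: IP address never found in dhcp leases file")
    }
    func deleteHost(name string) {} // graceful-shutdown wait, then SIGKILL

    // startWithRetry mirrors the control flow in the log: on a temporary
    // creation error, delete the half-created VM, wait five seconds, and
    // attempt the whole host creation once more.
    func startWithRetry(name string) error {
    	err := createHost(name)
    	if err == nil {
    		return nil
    	}
    	log.Printf("! StartHost failed, but will try again: %v", err)
    	deleteHost(name)
    	time.Sleep(5 * time.Second)
    	return createHost(name) // second attempt gets a fresh UUID and MAC
    }

    func main() {
    	if err := startWithRetry("force-systemd-flag-556000"); err != nil {
    		log.Fatal(err) // both attempts failed
    	}
    }

The retry below indeed regenerates the machine identity: the second createHost run uses UUID ecc34ce9-0ab4-45e2-bceb-14d89cc9e298 and MAC 22:61:9d:25:14:d1 instead of the MAC that never appeared in the lease file.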
	I0805 16:56:17.631148    6313 start.go:360] acquireMachinesLock for force-systemd-flag-556000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:57:10.401144    6313 start.go:364] duration metric: took 52.769770957s to acquireMachinesLock for "force-systemd-flag-556000"
	I0805 16:57:10.401166    6313 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-556000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-flag-556000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:57:10.401240    6313 start.go:125] createHost starting for "" (driver="hyperkit")
	I0805 16:57:10.422568    6313 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 16:57:10.422658    6313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:57:10.422675    6313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:57:10.431040    6313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53717
	I0805 16:57:10.431379    6313 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:57:10.431718    6313 main.go:141] libmachine: Using API Version  1
	I0805 16:57:10.431733    6313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:57:10.431940    6313 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:57:10.432060    6313 main.go:141] libmachine: (force-systemd-flag-556000) Calling .GetMachineName
	I0805 16:57:10.432154    6313 main.go:141] libmachine: (force-systemd-flag-556000) Calling .DriverName
	I0805 16:57:10.432292    6313 start.go:159] libmachine.API.Create for "force-systemd-flag-556000" (driver="hyperkit")
	I0805 16:57:10.432318    6313 client.go:168] LocalClient.Create starting
	I0805 16:57:10.432350    6313 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:57:10.432405    6313 main.go:141] libmachine: Decoding PEM data...
	I0805 16:57:10.432418    6313 main.go:141] libmachine: Parsing certificate...
	I0805 16:57:10.432459    6313 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:57:10.432501    6313 main.go:141] libmachine: Decoding PEM data...
	I0805 16:57:10.432511    6313 main.go:141] libmachine: Parsing certificate...
	I0805 16:57:10.432524    6313 main.go:141] libmachine: Running pre-create checks...
	I0805 16:57:10.432530    6313 main.go:141] libmachine: (force-systemd-flag-556000) Calling .PreCreateCheck
	I0805 16:57:10.432615    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:10.432648    6313 main.go:141] libmachine: (force-systemd-flag-556000) Calling .GetConfigRaw
	I0805 16:57:10.443584    6313 main.go:141] libmachine: Creating machine...
	I0805 16:57:10.443593    6313 main.go:141] libmachine: (force-systemd-flag-556000) Calling .Create
	I0805 16:57:10.443684    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:10.443812    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | I0805 16:57:10.443681    6400 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:57:10.443863    6313 main.go:141] libmachine: (force-systemd-flag-556000) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:57:10.709106    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | I0805 16:57:10.709038    6400 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/id_rsa...
	I0805 16:57:10.830081    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | I0805 16:57:10.830008    6400 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/force-systemd-flag-556000.rawdisk...
	I0805 16:57:10.830093    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Writing magic tar header
	I0805 16:57:10.830104    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Writing SSH key tar header
	I0805 16:57:10.830646    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | I0805 16:57:10.830610    6400 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000 ...
	I0805 16:57:11.206315    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:11.206332    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/hyperkit.pid
	I0805 16:57:11.206347    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Using UUID ecc34ce9-0ab4-45e2-bceb-14d89cc9e298
	I0805 16:57:11.232629    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Generated MAC 22:61:9d:25:14:d1
	I0805 16:57:11.232649    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-556000
	I0805 16:57:11.232698    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:11 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ecc34ce9-0ab4-45e2-bceb-14d89cc9e298", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:57:11.232729    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:11 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ecc34ce9-0ab4-45e2-bceb-14d89cc9e298", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:57:11.232768    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:11 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ecc34ce9-0ab4-45e2-bceb-14d89cc9e298", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/force-systemd-flag-556000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-556000"}
	I0805 16:57:11.232802    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:11 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ecc34ce9-0ab4-45e2-bceb-14d89cc9e298 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/force-systemd-flag-556000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-flag-556000"
	I0805 16:57:11.232809    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:11 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:57:11.235696    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:11 DEBUG: hyperkit: Pid is 6401
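The CmdLine above maps each virtio device to a PCI slot with -s, attaches com1 to an autopty so console output can be captured to console-ring, writes a pid file via -F (the "clean start, hyperkit pid file doesn't exist" check above looks for it), and boots the guest kernel directly via -f kexec instead of a bootloader. Below is a hedged Go sketch of starting such a process and obtaining the pid the log reports; this is not the hyperkit driver's actual code, and launchHyperkit is a hypothetical helper.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // launchHyperkit starts hyperkit with a pre-built argv (see the
    // logged CmdLine for the full flag set) and returns the child pid.
    func launchHyperkit(args []string) (int, error) {
    	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
    	cmd.Stdout = os.Stdout // the real driver redirects these to its logger
    	cmd.Stderr = os.Stderr
    	if err := cmd.Start(); err != nil {
    		return 0, err
    	}
    	return cmd.Process.Pid, nil
    }

    func main() {
    	pid, err := launchHyperkit(os.Args[1:]) // pass the logged argv here
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("hyperkit pid:", pid) // e.g. "Pid is 6401" above
    }
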
	I0805 16:57:11.236124    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 0
	I0805 16:57:11.236141    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:11.236206    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:11.237098    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:11.237170    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:11.237190    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:11.237213    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:11.237228    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:11.237247    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:11.237265    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:11.237277    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:11.237285    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:11.237299    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:11.237316    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:11.237328    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:11.237361    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:11.237386    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:11.237399    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:11.237406    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:11.237416    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:11.237424    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:11.237447    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:11.243260    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:11 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:57:11.251410    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:11 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-flag-556000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:57:11.252355    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:57:11.252386    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:57:11.252411    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:57:11.252428    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:57:11.630331    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:57:11.630356    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:57:11.744911    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:57:11.744929    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:57:11.744970    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:57:11.744992    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:11 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:57:11.745826    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:11 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:57:11.745848    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:11 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:57:13.239132    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 1
	I0805 16:57:13.239148    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:13.239191    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:13.240056    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:13.240101    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:13.240110    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:13.240121    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:13.240131    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:13.240139    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:13.240154    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:13.240188    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:13.240206    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:13.240219    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:13.240227    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:13.240235    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:13.240242    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:13.240250    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:13.240261    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:13.240270    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:13.240279    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:13.240290    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:13.240301    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:15.241852    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 2
	I0805 16:57:15.241867    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:15.241925    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:15.242770    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:15.242783    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:15.242789    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:15.242796    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:15.242801    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:15.242811    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:15.242817    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:15.242849    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:15.242866    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:15.242879    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:15.242890    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:15.242898    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:15.242906    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:15.242914    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:15.242922    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:15.242932    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:15.242940    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:15.242946    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:15.242952    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:17.136079    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:17 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:57:17.136189    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:17 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:57:17.136199    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:17 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:57:17.156042    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | 2024/08/05 16:57:17 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:57:17.244630    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 3
	I0805 16:57:17.244666    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:17.244884    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:17.246384    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:17.246529    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:17.246568    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:17.246585    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:17.246595    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:17.246606    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:17.246618    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:17.246638    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:17.246662    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:17.246679    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:17.246690    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:17.246701    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:17.246713    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:17.246727    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:17.246742    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:17.246769    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:17.246788    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:17.246799    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:17.246811    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:19.247411    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 4
	I0805 16:57:19.247428    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:19.247517    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:19.248311    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:19.248365    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:19.248375    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:19.248403    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:19.248430    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:19.248438    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:19.248447    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:19.248455    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:19.248463    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:19.248472    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:19.248488    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:19.248503    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:19.248513    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:19.248521    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:19.248529    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:19.248538    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:19.248553    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:19.248560    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:19.248568    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:21.250027    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 5
	I0805 16:57:21.250041    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:21.250182    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:21.251040    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:21.251088    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:21.251102    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:21.251130    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:21.251143    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:21.251152    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:21.251159    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:21.251165    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:21.251172    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:21.251178    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:21.251185    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:21.251198    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:21.251207    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:21.251214    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:21.251221    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:21.251227    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:21.251235    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:21.251241    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:21.251247    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:23.253064    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 6
	I0805 16:57:23.253081    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:23.253215    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:23.254034    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:23.254045    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:23.254054    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:23.254061    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:23.254074    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:23.254082    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:23.254088    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:23.254096    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:23.254106    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:23.254113    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:23.254121    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:23.254128    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:23.254135    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:23.254152    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:23.254165    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:23.254172    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:23.254181    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:23.254191    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:23.254200    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:25.255725    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 7
	I0805 16:57:25.255740    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:25.255829    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:25.256735    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:25.256776    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:25.256790    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:25.256804    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:25.256818    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:25.256829    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:25.256837    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:25.256853    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:25.256880    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:25.256899    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:25.256911    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:25.256919    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:25.256926    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:25.256933    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:25.256941    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:25.256948    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:25.256954    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:25.256960    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:25.256969    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:27.257221    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 8
	I0805 16:57:27.257235    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:27.257302    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:27.258149    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:27.258193    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:27.258206    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:27.258219    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:27.258226    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:27.258235    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:27.258243    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:27.258250    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:27.258256    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:27.258271    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:27.258279    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:27.258286    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:27.258295    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:27.258301    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:27.258310    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:27.258329    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:27.258341    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:27.258352    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:27.258361    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:29.260362    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 9
	I0805 16:57:29.260380    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:29.260716    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:29.261548    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:29.261556    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:29.261575    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:29.261586    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:29.261596    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:29.261605    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:29.261628    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:29.261639    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:29.261646    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:29.261654    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:29.261661    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:29.261667    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:29.261674    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:29.261687    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:29.261696    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:29.261702    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:29.261710    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:29.261725    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:29.261739    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:31.262053    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 10
	I0805 16:57:31.262069    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:31.262140    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:31.262910    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:31.262983    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:31.262996    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:31.263007    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:31.263014    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:31.263022    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:31.263032    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:31.263041    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:31.263050    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:31.263059    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:31.263066    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:31.263088    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:31.263102    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:31.263112    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:31.263120    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:31.263126    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:31.263143    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:31.263169    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:31.263179    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:33.263399    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 11
	I0805 16:57:33.263415    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:33.263544    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:33.264345    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:33.264396    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:33.264414    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:33.264426    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:33.264433    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:33.264439    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:33.264447    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:33.264455    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:33.264470    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:33.264479    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:33.264492    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:33.264507    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:33.264516    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:33.264524    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:33.264532    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:33.264541    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:33.264561    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:33.264575    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:33.264585    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:35.265295    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 12
	I0805 16:57:35.265311    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:35.265352    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:35.266195    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:35.266243    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:35.266253    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:35.266263    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:35.266271    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:35.266280    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:35.266289    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:35.266296    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:35.266302    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:35.266309    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:35.266316    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:35.266326    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:35.266335    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:35.266347    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:35.266358    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:35.266370    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:35.266381    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:35.266392    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:35.266400    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:37.266782    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 13
	I0805 16:57:37.266798    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:37.266866    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:37.267642    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:37.267692    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:37.267701    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:37.267714    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:37.267721    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:37.267734    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:37.267750    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:37.267757    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:37.267766    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:37.267782    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:37.267796    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:37.267806    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:37.267814    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:37.267822    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:37.267830    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:37.267849    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:37.267863    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:37.267871    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:37.267879    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:39.268573    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 14
	I0805 16:57:39.268588    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:39.268654    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:39.269453    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:39.269493    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:39.269506    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:39.269520    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:39.269527    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:39.269538    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:39.269546    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:39.269553    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:39.269559    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:39.269567    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:39.269575    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:39.269586    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:39.269593    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:39.269604    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:39.269614    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:39.269621    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:39.269628    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:39.269636    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:39.269652    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:41.271656    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 15
	I0805 16:57:41.271671    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:41.271813    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:41.272613    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:41.272681    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:41.272699    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:41.272710    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:41.272717    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:41.272725    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:41.272733    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:41.272748    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:41.272761    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:41.272770    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:41.272778    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:41.272786    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:41.272792    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:41.272799    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:41.272808    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:41.272815    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:41.272823    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:41.272830    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:41.272838    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:43.272862    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 16
	I0805 16:57:43.272874    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:43.272974    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:43.273786    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:43.273822    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:43.273829    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:43.273856    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:43.273876    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:43.273886    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:43.273892    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:43.273900    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:43.273907    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:43.273919    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:43.273931    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:43.273948    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:43.273962    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:43.273971    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:43.273985    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:43.274000    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:43.274015    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:43.274023    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:43.274038    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:45.274398    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 17
	I0805 16:57:45.274414    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:45.274495    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:45.275542    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:45.275587    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:45.275602    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:45.275616    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:45.275626    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:45.275640    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:45.275647    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:45.275653    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:45.275661    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:45.275667    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:45.275675    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:45.275688    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:45.275697    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:45.275704    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:45.275710    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:45.275728    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:45.275742    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:45.275752    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:45.275762    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:47.276288    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 18
	I0805 16:57:47.276304    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:47.276361    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:47.277162    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:47.277201    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:47.277211    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:47.277229    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:47.277238    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:47.277246    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:47.277252    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:47.277259    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:47.277267    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:47.277281    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:47.277301    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:47.277310    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:47.277323    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:47.277342    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:47.277355    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:47.277363    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:47.277370    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:47.277377    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:47.277383    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:49.277891    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 19
	I0805 16:57:49.277904    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:49.278015    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:49.278871    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:49.278922    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:49.278936    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:49.278943    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:49.278954    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:49.278968    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:49.278975    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:49.278983    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:49.278991    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:49.278997    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:49.279005    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:49.279023    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:49.279037    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:49.279053    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:49.279065    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:49.279074    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:49.279082    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:49.279095    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:49.279104    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:51.281132    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 20
	I0805 16:57:51.281149    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:51.281181    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:51.281970    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:51.282028    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:51.282040    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:51.282057    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:51.282069    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:51.282077    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:51.282086    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:51.282103    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:51.282114    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:51.282131    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:51.282144    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:51.282152    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:51.282164    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:51.282175    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:51.282185    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:51.282202    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:51.282210    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:51.282218    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:51.282227    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:53.282651    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 21
	I0805 16:57:53.282664    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:53.282744    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:53.283538    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:53.283565    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:53.283580    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:53.283597    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:53.283607    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:53.283620    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:53.283628    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:53.283658    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:53.283668    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:53.283683    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:53.283694    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:53.283703    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:53.283711    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:53.283725    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:53.283737    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:53.283744    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:53.283751    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:53.283757    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:53.283764    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:55.289153    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 22
	I0805 16:57:55.289176    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:55.289245    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:55.290025    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:55.290068    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:55.290082    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:55.290099    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:55.290116    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:55.290130    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:55.290145    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:55.290154    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:55.290162    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:55.290172    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:55.290181    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:55.290188    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:55.290196    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:55.290203    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:55.290212    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:55.290219    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:55.290225    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:55.290239    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:55.290252    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:57.295007    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 23
	I0805 16:57:57.295023    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:57.295086    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:57.295908    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:57.295967    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:57.295976    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:57.295985    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:57.295993    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:57.296001    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:57.296008    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:57.296015    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:57.296021    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:57.296029    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:57.296038    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:57.296045    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:57.296054    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:57.296061    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:57.296071    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:57.296077    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:57.296086    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:57.296096    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:57.296107    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:57:59.300786    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 24
	I0805 16:57:59.300798    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:57:59.300869    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:57:59.301676    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:57:59.301772    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:57:59.301784    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:57:59.301794    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:57:59.301800    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:57:59.301806    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:57:59.301816    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:57:59.301825    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:57:59.301831    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:57:59.301838    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:57:59.301843    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:57:59.301850    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:57:59.301857    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:57:59.301866    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:57:59.301885    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:57:59.301896    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:57:59.301917    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:57:59.301931    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:57:59.301941    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:58:01.306861    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 25
	I0805 16:58:01.306877    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:58:01.306921    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:58:01.307972    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:58:01.308020    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:58:01.308030    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:58:01.308039    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:58:01.308045    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:58:01.308060    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:58:01.308066    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:58:01.308073    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:58:01.308086    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:58:01.308093    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:58:01.308099    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:58:01.308106    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:58:01.308122    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:58:01.308135    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:58:01.308146    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:58:01.308154    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:58:01.308163    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:58:01.308169    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:58:01.308177    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:58:03.311318    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 26
	I0805 16:58:03.311333    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:58:03.311418    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:58:03.312221    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:58:03.312271    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:58:03.312281    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:58:03.312293    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:58:03.312300    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:58:03.312307    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:58:03.312314    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:58:03.312329    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:58:03.312337    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:58:03.312345    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:58:03.312362    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:58:03.312379    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:58:03.312392    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:58:03.312400    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:58:03.312409    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:58:03.312422    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:58:03.312431    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:58:03.312438    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:58:03.312446    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:58:05.315193    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 27
	I0805 16:58:05.315207    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:58:05.315264    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:58:05.316044    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:58:05.316090    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:58:05.316098    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:58:05.316107    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:58:05.316122    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:58:05.316141    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:58:05.316155    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:58:05.316164    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:58:05.316172    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:58:05.316187    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:58:05.316199    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:58:05.316209    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:58:05.316217    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:58:05.316224    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:58:05.316233    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:58:05.316240    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:58:05.316253    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:58:05.316270    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:58:05.316283    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:58:07.318722    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 28
	I0805 16:58:07.319179    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:58:07.319198    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:58:07.319663    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:58:07.319736    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:58:07.319750    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:58:07.319844    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:58:07.319882    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:58:07.319897    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:58:07.319909    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:58:07.319921    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:58:07.319935    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:58:07.319945    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:58:07.319958    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:58:07.319972    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:58:07.319983    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:58:07.319998    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:58:07.320007    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:58:07.320069    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:58:07.320089    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:58:07.320098    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:58:07.320111    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:58:09.324004    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Attempt 29
	I0805 16:58:09.324017    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:58:09.324075    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | hyperkit pid from json: 6401
	I0805 16:58:09.324868    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Searching for 22:61:9d:25:14:d1 in /var/db/dhcpd_leases ...
	I0805 16:58:09.324929    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:58:09.324946    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:58:09.324988    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:58:09.325007    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:58:09.325016    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:58:09.325026    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:58:09.325034    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:58:09.325041    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:58:09.325048    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:58:09.325054    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:58:09.325061    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:58:09.325068    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:58:09.325082    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:58:09.325089    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:58:09.325097    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:58:09.325105    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:58:09.325113    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:58:09.325123    6313 main.go:141] libmachine: (force-systemd-flag-556000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:58:11.328982    6313 client.go:171] duration metric: took 1m0.867150065s to LocalClient.Create
	I0805 16:58:13.331363    6313 start.go:128] duration metric: took 1m2.899008795s to createHost
	I0805 16:58:13.331433    6313 start.go:83] releasing machines lock for "force-systemd-flag-556000", held for 1m2.899171424s
	W0805 16:58:13.331521    6313 out.go:239] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-flag-556000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 22:61:9d:25:14:d1
	I0805 16:58:13.394818    6313 out.go:177] 
	W0805 16:58:13.415785    6313 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 22:61:9d:25:14:d1
	W0805 16:58:13.415805    6313 out.go:239] * 
	W0805 16:58:13.416414    6313 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:58:13.478757    6313 out.go:177] 

                                                
                                                
** /stderr **
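The failure above is the hyperkit driver polling macOS's bootpd lease database, /var/db/dhcpd_leases, once every ~2 seconds ("Attempt 0" through "Attempt 29") for a lease whose hardware address matches the VM's generated MAC, then giving up after the retry budget is spent. A minimal Go sketch of that lookup loop follows, assuming the lease-file format shown in the log; findIPForMAC and the 30-attempt/2-second budget are illustrative, not the driver's actual code. Note bootpd records hardware addresses with leading zeros stripped (e.g. a6:1c:88:9c:44:3), so the MAC being searched for must be normalized the same way.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPForMAC scans /var/db/dhcpd_leases entries of the form
// { name=... ip_address=... hw_address=1,<mac> ... } and returns the
// ip_address of the entry whose hw_address matches mac.
func findIPForMAC(path, mac string) (string, bool) {
	f, err := os.Open(path)
	if err != nil {
		return "", false
	}
	defer f.Close()

	var ip string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=") // remember the entry's IP
		case strings.HasPrefix(line, "hw_address="):
			// hw_address is stored as "1,<mac>"; match on the MAC suffix.
			if strings.HasSuffix(line, ","+mac) {
				return ip, true
			}
		}
	}
	return "", false
}

func main() {
	const mac = "22:61:9d:25:14:d1" // the MAC the failing run above was waiting for
	for attempt := 0; attempt < 30; attempt++ {
		if ip, ok := findIPForMAC("/var/db/dhcpd_leases", mac); ok {
			fmt.Printf("found %s -> %s\n", mac, ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	// The state the test ended in: 17 leases present, none matching.
	fmt.Println("IP address never found in dhcp leases file")
}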
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-flag-556000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-556000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-flag-556000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (178.0976ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-flag-556000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-flag-556000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
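For context, the assertion at docker_test.go:110 that produced exit status 50 is a "minikube ssh" round-trip that reads Docker's cgroup driver, which can only succeed once the control-plane node has an IP. A hedged sketch of the equivalent check follows; the binary path, profile name, and command are copied from the log above, while the cgroupDriver helper and the "systemd" expectation are illustrative, not the test's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// cgroupDriver asks the running minikube VM which cgroup driver Docker uses.
func cgroupDriver(profile string) (string, error) {
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", profile,
		"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("minikube ssh: %v: %s", err, out)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	driver, err := cgroupDriver("force-systemd-flag-556000")
	if err != nil {
		fmt.Println(err) // in this run: exit status 50, DRV_CP_ENDPOINT, since the VM never got an IP
		return
	}
	if driver != "systemd" {
		fmt.Printf("expected cgroup driver systemd, got %q\n", driver)
	}
}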
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-08-05 16:58:13.766067 -0700 PDT m=+4273.306610050
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-556000 -n force-systemd-flag-556000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-flag-556000 -n force-systemd-flag-556000: exit status 7 (83.724831ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 16:58:13.847487    6423 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0805 16:58:13.847512    6423 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-556000" host is not running, skipping log retrieval (state="Error")
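The post-mortem probe above exits 7 with host state "Error", which the helper deliberately treats as informational ("may be ok"): the host state printed on stdout, not the exit code, decides whether log retrieval is skipped. A small sketch of that probe, assuming only the command shown in the log; the exit-code handling is illustrative.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.Host}}", "-p", "force-systemd-flag-556000",
		"-n", "force-systemd-flag-556000")
	out, err := cmd.CombinedOutput()
	fmt.Printf("host state: %s\n", out) // "Error" in this run

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A non-zero exit encodes the host/cluster state; 7 here accompanies
		// a host that exists but is not running.
		fmt.Printf("status exited with code %d (may be ok)\n", exitErr.ExitCode())
	}
}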
helpers_test.go:175: Cleaning up "force-systemd-flag-556000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-556000
E0805 16:58:19.174222    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-556000: (5.331266442s)
--- FAIL: TestForceSystemdFlag (252.26s)

                                                
                                    
TestForceSystemdEnv (233.95s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-870000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E0805 16:51:19.250533    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:51:50.654023    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p force-systemd-env-870000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : exit status 80 (3m48.362806338s)

                                                
                                                
-- stdout --
	* [force-systemd-env-870000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the hyperkit driver based on user configuration
	* Starting "force-systemd-env-870000" primary control-plane node in "force-systemd-env-870000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "force-systemd-env-870000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:51:16.125356    6251 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:51:16.126027    6251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:51:16.126036    6251 out.go:304] Setting ErrFile to fd 2...
	I0805 16:51:16.126041    6251 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:51:16.126351    6251 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:51:16.128199    6251 out.go:298] Setting JSON to false
	I0805 16:51:16.150653    6251 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4847,"bootTime":1722897029,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:51:16.150750    6251 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:51:16.172446    6251 out.go:177] * [force-systemd-env-870000] minikube v1.33.1 on Darwin 14.5
	I0805 16:51:16.214054    6251 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:51:16.214101    6251 notify.go:220] Checking for updates...
	I0805 16:51:16.255941    6251 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:51:16.277039    6251 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:51:16.298001    6251 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:51:16.318965    6251 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:51:16.340047    6251 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0805 16:51:16.361483    6251 config.go:182] Loaded profile config "offline-docker-642000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:51:16.361562    6251 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:51:16.389939    6251 out.go:177] * Using the hyperkit driver based on user configuration
	I0805 16:51:16.432011    6251 start.go:297] selected driver: hyperkit
	I0805 16:51:16.432022    6251 start.go:901] validating driver "hyperkit" against <nil>
	I0805 16:51:16.432033    6251 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:51:16.434908    6251 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:51:16.435023    6251 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 16:51:16.443360    6251 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 16:51:16.447203    6251 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:51:16.447239    6251 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 16:51:16.447272    6251 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:51:16.447483    6251 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 16:51:16.447527    6251 cni.go:84] Creating CNI manager for ""
	I0805 16:51:16.447545    6251 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 16:51:16.447552    6251 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 16:51:16.447629    6251 start.go:340] cluster config:
	{Name:force-systemd-env-870000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:51:16.447713    6251 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:51:16.469024    6251 out.go:177] * Starting "force-systemd-env-870000" primary control-plane node in "force-systemd-env-870000" cluster
	I0805 16:51:16.490017    6251 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:51:16.490041    6251 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 16:51:16.490051    6251 cache.go:56] Caching tarball of preloaded images
	I0805 16:51:16.490148    6251 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:51:16.490156    6251 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:51:16.490226    6251 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/force-systemd-env-870000/config.json ...
	I0805 16:51:16.490242    6251 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/force-systemd-env-870000/config.json: {Name:mk7e288cab05cc5c6158e01779629a5a07378bc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:51:16.490570    6251 start.go:360] acquireMachinesLock for force-systemd-env-870000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:51:55.582780    6251 start.go:364] duration metric: took 39.092047041s to acquireMachinesLock for "force-systemd-env-870000"
	I0805 16:51:55.582820    6251 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:51:55.582876    6251 start.go:125] createHost starting for "" (driver="hyperkit")
	I0805 16:51:55.606020    6251 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 16:51:55.606163    6251 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:51:55.606207    6251 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:51:55.614970    6251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53671
	I0805 16:51:55.615493    6251 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:51:55.616080    6251 main.go:141] libmachine: Using API Version  1
	I0805 16:51:55.616090    6251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:51:55.616355    6251 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:51:55.616475    6251 main.go:141] libmachine: (force-systemd-env-870000) Calling .GetMachineName
	I0805 16:51:55.616626    6251 main.go:141] libmachine: (force-systemd-env-870000) Calling .DriverName
	I0805 16:51:55.616778    6251 start.go:159] libmachine.API.Create for "force-systemd-env-870000" (driver="hyperkit")
	I0805 16:51:55.616801    6251 client.go:168] LocalClient.Create starting
	I0805 16:51:55.616835    6251 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:51:55.616888    6251 main.go:141] libmachine: Decoding PEM data...
	I0805 16:51:55.616905    6251 main.go:141] libmachine: Parsing certificate...
	I0805 16:51:55.616967    6251 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:51:55.617006    6251 main.go:141] libmachine: Decoding PEM data...
	I0805 16:51:55.617016    6251 main.go:141] libmachine: Parsing certificate...
	I0805 16:51:55.617027    6251 main.go:141] libmachine: Running pre-create checks...
	I0805 16:51:55.617040    6251 main.go:141] libmachine: (force-systemd-env-870000) Calling .PreCreateCheck
	I0805 16:51:55.617120    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:55.617256    6251 main.go:141] libmachine: (force-systemd-env-870000) Calling .GetConfigRaw
	I0805 16:51:55.647922    6251 main.go:141] libmachine: Creating machine...
	I0805 16:51:55.647931    6251 main.go:141] libmachine: (force-systemd-env-870000) Calling .Create
	I0805 16:51:55.648022    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:55.648177    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | I0805 16:51:55.648014    6268 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:51:55.648207    6251 main.go:141] libmachine: (force-systemd-env-870000) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:51:55.854341    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | I0805 16:51:55.854231    6268 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/id_rsa...
	I0805 16:51:55.914974    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | I0805 16:51:55.914902    6268 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/force-systemd-env-870000.rawdisk...
	I0805 16:51:55.914986    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Writing magic tar header
	I0805 16:51:55.914995    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Writing SSH key tar header
	I0805 16:51:55.915566    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | I0805 16:51:55.915522    6268 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000 ...
	I0805 16:51:56.287569    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:56.287588    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/hyperkit.pid
	I0805 16:51:56.287599    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Using UUID b7aca4e7-507a-430e-bae4-c7f59904688a
	I0805 16:51:56.313845    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Generated MAC 2a:8e:28:55:24:f9
	I0805 16:51:56.313883    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-870000
	I0805 16:51:56.313936    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:51:56 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b7aca4e7-507a-430e-bae4-c7f59904688a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:51:56.313971    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:51:56 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"b7aca4e7-507a-430e-bae4-c7f59904688a", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001e0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:51:56.314037    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:51:56 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "b7aca4e7-507a-430e-bae4-c7f59904688a", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/force-systemd-env-870000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-870000"}
	I0805 16:51:56.314088    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:51:56 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U b7aca4e7-507a-430e-bae4-c7f59904688a -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/force-systemd-env-870000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-870000"
	I0805 16:51:56.314100    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:51:56 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:51:56.317059    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:51:56 DEBUG: hyperkit: Pid is 6269
	I0805 16:51:56.318382    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 0
	I0805 16:51:56.318397    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:56.318479    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:51:56.319716    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for 2a:8e:28:55:24:f9 in /var/db/dhcpd_leases ...
	I0805 16:51:56.319770    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:51:56.319793    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:51:56.319813    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:51:56.319828    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:51:56.319838    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:51:56.319844    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:51:56.319852    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:51:56.319858    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:51:56.319865    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:51:56.319873    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:51:56.319880    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:51:56.319886    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:51:56.319893    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:51:56.319902    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:51:56.319915    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:51:56.319924    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:51:56.319931    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:51:56.319939    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:51:56.325081    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:51:56 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:51:56.333380    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:51:56 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:51:56.334440    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:51:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:51:56.334466    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:51:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:51:56.334496    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:51:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:51:56.334537    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:51:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:51:56.710152    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:51:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:51:56.710168    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:51:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:51:56.824756    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:51:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:51:56.824790    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:51:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:51:56.824818    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:51:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:51:56.824867    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:51:56 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:51:56.825675    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:51:56 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:51:56.825685    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:51:56 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:51:58.320371    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 1
	I0805 16:51:58.320385    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:51:58.320491    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:51:58.321249    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for 2a:8e:28:55:24:f9 in /var/db/dhcpd_leases ...
	I0805 16:51:58.321319    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:51:58.321327    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:51:58.321340    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:51:58.321345    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:51:58.321353    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:51:58.321359    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:51:58.321367    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:51:58.321376    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:51:58.321382    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:51:58.321388    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:51:58.321396    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:51:58.321402    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:51:58.321408    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:51:58.321416    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:51:58.321433    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:51:58.321450    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:51:58.321458    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:51:58.321469    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:52:00.322034    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 2
	I0805 16:52:00.322051    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:00.322152    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:52:00.322936    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for 2a:8e:28:55:24:f9 in /var/db/dhcpd_leases ...
	I0805 16:52:00.322991    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:52:00.323002    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:52:00.323009    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:52:00.323015    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:52:00.323023    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:52:00.323029    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:52:00.323036    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:52:00.323041    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:52:00.323054    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:52:00.323061    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:52:00.323075    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:52:00.323097    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:52:00.323105    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:52:00.323115    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:52:00.323123    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:52:00.323134    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:52:00.323142    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:52:00.323150    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:52:02.183369    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:52:02 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0805 16:52:02.183509    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:52:02 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0805 16:52:02.183518    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:52:02 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0805 16:52:02.203104    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:52:02 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0805 16:52:02.324519    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 3
	I0805 16:52:02.324543    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:02.324761    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:52:02.326211    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for 2a:8e:28:55:24:f9 in /var/db/dhcpd_leases ...
	I0805 16:52:02.326339    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:52:02.326361    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:52:02.326421    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:52:02.326455    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:52:02.326472    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:52:02.326483    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:52:02.326495    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:52:02.326505    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:52:02.326517    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:52:02.326535    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:52:02.326552    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:52:02.326563    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:52:02.326574    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:52:02.326593    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:52:02.326605    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:52:02.326614    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:52:02.326624    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:52:02.326635    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
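	[Editor's note] The stretch of log above and below repeats one operation: after each boot attempt the driver re-reads /var/db/dhcpd_leases and scans it for the VM's generated MAC (2a:8e:28:55:24:f9), which never appears among the 17 existing entries. As a rough illustration of what that scan involves — not the driver's actual code — the sketch below parses the brace-delimited key=value lease blocks that macOS's bootpd writes and matches on hw_address. Note from entries like a6:1c:88:9c:44:3 that bootpd drops leading zeros in octets, so both sides are normalized before comparing. The file grammar, helper names, and error handling here are assumptions.

```go
// Illustrative sketch only — not the hyperkit driver's real implementation.
// Assumes macOS bootpd writes /var/db/dhcpd_leases as brace-delimited blocks
// of key=value pairs, e.g.:
//
//	{
//		name=minikube
//		ip_address=192.169.0.18
//		hw_address=1,1a:fc:f3:eb:cb:4b
//		lease=0x66b2b679
//	}
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// normalizeMAC lowercases a MAC and strips leading zeros from each octet,
// so "A6:1C:88:9C:44:03" compares equal to bootpd's "a6:1c:88:9c:44:3".
func normalizeMAC(mac string) string {
	parts := strings.Split(strings.ToLower(mac), ":")
	for i, p := range parts {
		if t := strings.TrimLeft(p, "0"); t != "" {
			parts[i] = t
		} else {
			parts[i] = "0"
		}
	}
	return strings.Join(parts, ":")
}

// findLeaseIP scans the lease file for a block whose hw_address matches mac
// and returns that block's ip_address.
func findLeaseIP(path, mac string) (ip string, found bool, err error) {
	f, err := os.Open(path)
	if err != nil {
		return "", false, err
	}
	defer f.Close()

	want := normalizeMAC(mac)
	var curIP, curHW string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // new lease block starts
			curIP, curHW = "", ""
		case strings.HasPrefix(line, "ip_address="):
			curIP = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			curHW = strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(curHW, ','); i >= 0 {
				curHW = curHW[i+1:] // drop the "1," hardware-type prefix
			}
		case line == "}": // block closed: test the match
			if curHW != "" && normalizeMAC(curHW) == want {
				return curIP, true, nil
			}
		}
	}
	return "", false, sc.Err()
}

func main() {
	ip, ok, err := findLeaseIP("/var/db/dhcpd_leases", "2a:8e:28:55:24:f9")
	switch {
	case err != nil:
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	case ok:
		fmt.Println("lease found:", ip)
	default:
		fmt.Println("no lease yet") // the state this log is stuck in
	}
}
```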
	I0805 16:52:04.326644    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 4
	I0805 16:52:04.326661    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:04.326764    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:52:04.327540    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for 2a:8e:28:55:24:f9 in /var/db/dhcpd_leases ...
	I0805 16:52:04.327603    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:52:04.327614    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:52:04.327624    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:52:04.327631    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:52:04.327658    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:52:04.327679    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:52:04.327692    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:52:04.327704    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:52:04.327716    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:52:04.327724    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:52:04.327730    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:52:04.327737    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:52:04.327746    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:52:04.327756    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:52:04.327762    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:52:04.327770    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:52:04.327776    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:52:04.327783    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:52:06.328583    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 5
	I0805 16:52:06.328600    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:06.328667    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:52:06.329431    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for 2a:8e:28:55:24:f9 in /var/db/dhcpd_leases ...
	I0805 16:52:06.329475    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:52:06.329484    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:52:06.329492    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:52:06.329500    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:52:06.329530    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:52:06.329543    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:52:06.329552    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:52:06.329558    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:52:06.329572    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:52:06.329580    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:52:06.329588    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:52:06.329604    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:52:06.329613    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:52:06.329620    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:52:06.329628    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:52:06.329642    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:52:06.329658    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:52:06.329675    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:52:08.331214    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 6
	I0805 16:52:08.331230    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:08.331375    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:52:08.332156    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for 2a:8e:28:55:24:f9 in /var/db/dhcpd_leases ...
	I0805 16:52:08.332190    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:52:08.332200    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:52:08.332209    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:52:08.332230    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:52:08.332250    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:52:08.332264    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:52:08.332278    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:52:08.332291    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:52:08.332301    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:52:08.332313    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:52:08.332323    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:52:08.332329    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:52:08.332345    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:52:08.332356    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:52:08.332364    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:52:08.332370    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:52:08.332377    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:52:08.332385    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:52:10.334421    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 7
	I0805 16:52:10.334434    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:10.334500    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:52:10.335322    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for 2a:8e:28:55:24:f9 in /var/db/dhcpd_leases ...
	I0805 16:52:10.335377    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:52:10.335388    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:52:10.335410    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:52:10.335423    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:52:10.335434    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:52:10.335443    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:52:10.335461    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:52:10.335472    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:52:10.335480    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:52:10.335488    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:52:10.335495    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:52:10.335503    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:52:10.335509    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:52:10.335516    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:52:10.335528    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:52:10.335536    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:52:10.335544    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:52:10.335552    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:52:12.336168    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 8
	I0805 16:52:12.336182    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:12.336321    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:52:12.337096    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for 2a:8e:28:55:24:f9 in /var/db/dhcpd_leases ...
	I0805 16:52:12.337146    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:52:12.337159    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:52:12.337176    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:52:12.337187    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:52:12.337212    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:52:12.337232    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:52:12.337244    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:52:12.337257    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:52:12.337265    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:52:12.337273    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:52:12.337289    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:52:12.337304    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:52:12.337313    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:52:12.337321    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:52:12.337331    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:52:12.337340    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:52:12.337347    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:52:12.337353    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:52:14.338802    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 9
	I0805 16:52:14.338829    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:14.338954    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:52:14.339743    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for 2a:8e:28:55:24:f9 in /var/db/dhcpd_leases ...
	I0805 16:52:14.339795    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:52:14.339806    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:52:14.339824    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:52:14.339837    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:52:14.339846    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:52:14.339852    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:52:14.339863    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:52:14.339872    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:52:14.339880    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:52:14.339886    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:52:14.339892    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:52:14.339899    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:52:14.339907    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:52:14.339922    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:52:14.339934    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:52:14.339944    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:52:14.339952    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:52:14.339960    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:52:16.339966    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 10
	I0805 16:52:16.339981    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:16.340041    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:52:16.340813    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for 2a:8e:28:55:24:f9 in /var/db/dhcpd_leases ...
	I0805 16:52:16.340865    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:52:16.340880    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:52:16.340892    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:52:16.340899    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:52:16.340908    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:52:16.340921    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:52:16.340929    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:52:16.340948    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:52:16.340958    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:52:16.340966    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:52:16.340974    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:52:16.340988    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:52:16.340995    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:52:16.341002    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:52:16.341007    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:52:16.341026    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:52:16.341039    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:52:16.341049    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:52:18.342786    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 11
	I0805 16:52:18.342812    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:18.342896    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:52:18.343674    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for 2a:8e:28:55:24:f9 in /var/db/dhcpd_leases ...
	I0805 16:52:18.343706    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:52:18.343727    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:52:18.343736    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:52:18.343743    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:52:18.343751    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:52:18.343760    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:52:18.343768    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:52:18.343774    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:52:18.343780    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:52:18.343787    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:52:18.343795    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:52:18.343802    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:52:18.343812    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:52:18.343824    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:52:18.343831    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:52:18.343839    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:52:18.343848    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:52:18.343854    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:52:20.345458    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 12
	I0805 16:52:20.345473    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:20.345592    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:52:20.346364    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for 2a:8e:28:55:24:f9 in /var/db/dhcpd_leases ...
	I0805 16:52:20.346418    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:52:20.346430    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:52:20.346439    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:52:20.346446    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:52:20.346476    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:52:20.346491    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:52:20.346502    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:52:20.346511    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:52:20.346522    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:52:20.346535    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:52:20.346545    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:52:20.346552    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:52:20.346560    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:52:20.346566    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:52:20.346572    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:52:20.346578    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:52:20.346617    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:52:20.346631    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:52:22.347444    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 13
	I0805 16:52:22.347469    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:22.347548    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:52:22.348341    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for 2a:8e:28:55:24:f9 in /var/db/dhcpd_leases ...
	I0805 16:52:22.348374    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:52:22.348398    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:52:22.348408    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:52:22.348415    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:52:22.348422    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:52:22.348429    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:52:22.348443    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:52:22.348453    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:52:22.348470    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:52:22.348479    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:52:22.348489    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:52:22.348498    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:52:22.348505    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:52:22.348512    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:52:22.348519    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:52:22.348528    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:52:22.348546    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:52:22.348560    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:52:24.349485    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 14
	I0805 16:52:24.349501    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:24.349585    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:52:24.350355    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for 2a:8e:28:55:24:f9 in /var/db/dhcpd_leases ...
	I0805 16:52:24.350394    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:52:24.350410    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:52:24.350426    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:52:24.350442    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:52:24.350463    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:52:24.350476    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:52:24.350484    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:52:24.350493    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:52:24.350500    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:52:24.350507    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:52:24.350514    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:52:24.350522    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:52:24.350529    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:52:24.350536    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:52:24.350564    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:52:24.350583    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:52:24.350592    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:52:24.350603    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:52:26.351200    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 15
	I0805 16:52:26.351212    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:26.351291    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:52:26.352076    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for 2a:8e:28:55:24:f9 in /var/db/dhcpd_leases ...
	I0805 16:52:26.352124    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:52:26.352133    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:52:26.352149    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:52:26.352157    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:52:26.352165    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:52:26.352171    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:52:26.352177    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:52:26.352187    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:52:26.352203    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:52:26.352217    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:52:26.352228    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:52:26.352237    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:52:26.352244    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:52:26.352253    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:52:26.352259    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:52:26.352267    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:52:26.352275    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:52:26.352282    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:52:28.352505    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 16
	I0805 16:52:28.352518    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:28.352644    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:52:28.353440    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for 2a:8e:28:55:24:f9 in /var/db/dhcpd_leases ...
	I0805 16:52:28.353485    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:52:28.353499    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:52:28.353510    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:52:28.353519    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:52:28.353527    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:52:28.353534    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:52:28.353542    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:52:28.353554    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:52:28.353562    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:52:28.353570    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:52:28.353577    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:52:28.353592    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:52:28.353600    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:52:28.353607    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:52:28.353615    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:52:28.353622    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:52:28.353628    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:52:28.353640    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	[... Attempts 17 through 29 elided: the driver repeats the identical scan every 2 seconds from 16:52:30 through 16:52:54, each time finding the same 17 dhcpd lease entries listed above and no entry matching 2a:8e:28:55:24:f9 ...]
	I0805 16:52:56.378973    6251 client.go:171] duration metric: took 1m0.761945632s to LocalClient.Create
	I0805 16:52:58.381042    6251 start.go:128] duration metric: took 1m2.797932519s to createHost
	I0805 16:52:58.381058    6251 start.go:83] releasing machines lock for "force-systemd-env-870000", held for 1m2.798043693s
	W0805 16:52:58.381076    6251 start.go:714] error starting host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2a:8e:28:55:24:f9
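The numbered attempts above are the hyperkit driver's IP-discovery loop: roughly every two seconds it re-reads /var/db/dhcpd_leases and looks for a lease whose hardware address matches the MAC hyperkit assigned to the VM (2a:8e:28:55:24:f9). Thirty attempts at two-second intervals is what accounts for the "1m0.761945632s to LocalClient.Create" reported above before the create is abandoned. The sketch below is an illustrative reconstruction of such a loop, not the actual docker-machine-driver-hyperkit source; it assumes the lease file uses macOS bootpd's block format with ip_address= and hw_address=1,<mac> lines, as the dhcp entries in this log suggest.

// Illustrative sketch of the 2-second polling loop visible in the log.
// Assumptions (not taken from the driver source): lease blocks contain an
// ip_address= line followed by an hw_address=1,<mac> line.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// findIPForMAC scans the leases file once and returns the IP currently
// bound to mac, if any.
func findIPForMAC(path, mac string) (string, bool) {
	f, err := os.Open(path)
	if err != nil {
		return "", false
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			// Remember the IP; the hw_address line follows it in the block.
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address looks like "1,2a:8e:28:55:24:f9".
			if strings.HasSuffix(line, ","+mac) {
				return ip, true
			}
		}
	}
	return "", false
}

func main() {
	const mac = "2a:8e:28:55:24:f9" // MAC assigned to the hyperkit VM in this run
	for attempt := 0; attempt < 30; attempt++ {
		if ip, ok := findIPForMAC("/var/db/dhcpd_leases", mac); ok {
			fmt.Printf("attempt %d: found %s\n", attempt, ip)
			return
		}
		fmt.Printf("attempt %d: no lease for %s yet\n", attempt, mac)
		time.Sleep(2 * time.Second)
	}
	// Mirrors the failure mode in this report.
	fmt.Println("IP address never found in dhcp leases file")
}

In this run the loop never succeeds: the guest evidently never completed a DHCP exchange, so no 18th entry for its MAC ever appears, and the driver gives up and reports the temporary error above.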
	I0805 16:52:58.381483    6251 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:52:58.381509    6251 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:52:58.390528    6251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53673
	I0805 16:52:58.390994    6251 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:52:58.391445    6251 main.go:141] libmachine: Using API Version  1
	I0805 16:52:58.391459    6251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:52:58.391717    6251 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:52:58.392061    6251 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:52:58.392084    6251 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:52:58.400645    6251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53675
	I0805 16:52:58.401033    6251 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:52:58.401451    6251 main.go:141] libmachine: Using API Version  1
	I0805 16:52:58.401470    6251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:52:58.401676    6251 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:52:58.401805    6251 main.go:141] libmachine: (force-systemd-env-870000) Calling .GetState
	I0805 16:52:58.401894    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:58.401958    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:52:58.402921    6251 main.go:141] libmachine: (force-systemd-env-870000) Calling .DriverName
	I0805 16:52:58.423528    6251 out.go:177] * Deleting "force-systemd-env-870000" in hyperkit ...
	I0805 16:52:58.481301    6251 main.go:141] libmachine: (force-systemd-env-870000) Calling .Remove
	I0805 16:52:58.481417    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:58.481429    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:58.481493    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:52:58.482423    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:58.482490    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | waiting for graceful shutdown
	I0805 16:52:59.484655    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:52:59.484748    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:52:59.485637    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | waiting for graceful shutdown
	I0805 16:53:00.485989    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:53:00.486046    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:53:00.487678    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | waiting for graceful shutdown
	I0805 16:53:01.489082    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:53:01.489170    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:53:01.489892    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | waiting for graceful shutdown
	I0805 16:53:02.490861    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:53:02.490960    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:53:02.491553    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | waiting for graceful shutdown
	I0805 16:53:03.492034    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:53:03.492116    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6269
	I0805 16:53:03.493268    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | sending sigkill
	I0805 16:53:03.493279    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:53:03.502961    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:53:03 WARN : hyperkit: failed to read stdout: EOF
	I0805 16:53:03.502987    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:53:03 WARN : hyperkit: failed to read stderr: EOF
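
	The delete sequence above gives the VM several one-second chances to exit on its own before escalating; the two EOF warnings are hyperkit's stdout/stderr pipes closing once the process dies. A minimal sketch of that stop pattern in Go, assuming the pid has already been read from hyperkit.pid (the 5-attempt budget and the stopVM name are illustrative, not the driver's exact code):

```go
package main

import (
	"fmt"
	"syscall"
	"time"
)

// stopVM asks the hypervisor process to exit gracefully, then kills it.
// pid is the hyperkit process id read from hyperkit.pid.
func stopVM(pid int, attempts int) error {
	// Ask politely first: SIGTERM triggers hyperkit's own shutdown path.
	_ = syscall.Kill(pid, syscall.SIGTERM)
	for i := 0; i < attempts; i++ {
		// Signal 0 delivers nothing; it only checks the process still exists.
		if err := syscall.Kill(pid, 0); err != nil {
			return nil // process is gone: graceful shutdown worked
		}
		fmt.Println("waiting for graceful shutdown")
		time.Sleep(time.Second)
	}
	// Still alive after the grace period: force it, as the log does.
	fmt.Println("sending sigkill")
	return syscall.Kill(pid, syscall.SIGKILL)
}

func main() {
	_ = stopVM(6311, 5)
}
```
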
	W0805 16:53:03.518457    6251 out.go:239] ! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2a:8e:28:55:24:f9
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2a:8e:28:55:24:f9
	I0805 16:53:03.518479    6251 start.go:729] Will try again in 5 seconds ...
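
	The failure itself comes from the MAC-to-IP lookup: after launching the VM, the driver repeatedly scans macOS's /var/db/dhcpd_leases for the generated MAC address, and gives up when no lease ever appears. A minimal sketch of one such scan, assuming the usual block format of that file (name=/ip_address=/hw_address= lines between braces, with ip_address listed before hw_address, matching the field order of the entries logged above; ipForMAC is an illustrative name):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans a dhcpd_leases-style file for a lease whose hw_address
// matches mac and returns its ip_address. It returns "" when no lease
// matches yet, which is what drives the retry loop seen in the log.
func ipForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // a new lease block begins
			ip = ""
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address=1,ca:ff:4e:4e:a9:28 -- drop the "1," type prefix.
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:]
			}
			if strings.EqualFold(hw, mac) {
				return ip, nil
			}
		}
	}
	return "", sc.Err()
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "ca:ff:4e:4e:a9:28")
	fmt.Println(ip, err)
}
```
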
	I0805 16:53:08.519744    6251 start.go:360] acquireMachinesLock for force-systemd-env-870000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:54:01.288105    6251 start.go:364] duration metric: took 52.768111312s to acquireMachinesLock for "force-systemd-env-870000"
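
	acquireMachinesLock serializes machine creation across the tests sharing this host, which is why this run waited ~53 seconds before provisioning; the logged options (Delay:500ms Timeout:13m0s) describe the retry cadence. A rough sketch of that acquire-with-delay-and-timeout shape using an O_EXCL lock file (minikube uses a lock library internally; this file-based variant is illustrative):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file, retrying every delay
// until timeout. O_EXCL makes creation fail while another holder exists.
func acquireLock(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			return f.Close() // lock acquired; caller removes path to release
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	// Values mirror the Delay:500ms Timeout:13m0s shown in the log.
	if err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```
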
	I0805 16:54:01.288136    6251 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-870000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:force-systemd-env-870000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:54:01.288186    6251 start.go:125] createHost starting for "" (driver="hyperkit")
	I0805 16:54:01.329165    6251 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0805 16:54:01.329249    6251 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:54:01.329274    6251 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:54:01.338570    6251 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53679
	I0805 16:54:01.339047    6251 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:54:01.339545    6251 main.go:141] libmachine: Using API Version  1
	I0805 16:54:01.339588    6251 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:54:01.339891    6251 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:54:01.340007    6251 main.go:141] libmachine: (force-systemd-env-870000) Calling .GetMachineName
	I0805 16:54:01.340191    6251 main.go:141] libmachine: (force-systemd-env-870000) Calling .DriverName
	I0805 16:54:01.340339    6251 start.go:159] libmachine.API.Create for "force-systemd-env-870000" (driver="hyperkit")
	I0805 16:54:01.340364    6251 client.go:168] LocalClient.Create starting
	I0805 16:54:01.340391    6251 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:54:01.340447    6251 main.go:141] libmachine: Decoding PEM data...
	I0805 16:54:01.340460    6251 main.go:141] libmachine: Parsing certificate...
	I0805 16:54:01.340504    6251 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:54:01.340542    6251 main.go:141] libmachine: Decoding PEM data...
	I0805 16:54:01.340554    6251 main.go:141] libmachine: Parsing certificate...
	I0805 16:54:01.340567    6251 main.go:141] libmachine: Running pre-create checks...
	I0805 16:54:01.340573    6251 main.go:141] libmachine: (force-systemd-env-870000) Calling .PreCreateCheck
	I0805 16:54:01.340697    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:01.340723    6251 main.go:141] libmachine: (force-systemd-env-870000) Calling .GetConfigRaw
	I0805 16:54:01.349646    6251 main.go:141] libmachine: Creating machine...
	I0805 16:54:01.349655    6251 main.go:141] libmachine: (force-systemd-env-870000) Calling .Create
	I0805 16:54:01.349748    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:01.349916    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | I0805 16:54:01.349743    6301 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:54:01.349982    6251 main.go:141] libmachine: (force-systemd-env-870000) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:54:01.671821    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | I0805 16:54:01.671751    6301 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/id_rsa...
	I0805 16:54:01.768560    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | I0805 16:54:01.768504    6301 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/force-systemd-env-870000.rawdisk...
	I0805 16:54:01.768574    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Writing magic tar header
	I0805 16:54:01.768584    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Writing SSH key tar header
	I0805 16:54:01.768914    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | I0805 16:54:01.768882    6301 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000 ...
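
	The "Writing magic tar header" / "Writing SSH key tar header" steps embed the freshly generated SSH key as a small tar archive at the front of the raw disk, so the boot2docker guest can extract it on first boot; the file is then extended to its full, sparse size. A rough sketch of that layout (the paths, the authorized_keys entry name, and the toy size are illustrative, not the driver's exact format):

```go
package main

import (
	"archive/tar"
	"os"
)

// writeRawDisk lays a tar archive containing the SSH public key at the
// start of a raw disk file, then truncates it out to sizeBytes.
func writeRawDisk(path string, pubKey []byte, sizeBytes int64) error {
	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	tw := tar.NewWriter(f)
	// The guest's init looks for this entry inside the leading tar stream.
	hdr := &tar.Header{
		Name:     ".ssh/authorized_keys",
		Mode:     0o600,
		Size:     int64(len(pubKey)),
		Typeflag: tar.TypeReg,
	}
	if err := tw.WriteHeader(hdr); err != nil {
		return err
	}
	if _, err := tw.Write(pubKey); err != nil {
		return err
	}
	if err := tw.Close(); err != nil {
		return err
	}
	// Extend to the full disk size; the unwritten tail stays sparse.
	return f.Truncate(sizeBytes)
}

func main() {
	key, _ := os.ReadFile(os.ExpandEnv("$HOME/.ssh/id_rsa.pub"))
	_ = writeRawDisk("demo.rawdisk", key, 20000*1024*1024) // 20000MB, as in the log
}
```
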
	I0805 16:54:02.146726    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:02.146781    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/hyperkit.pid
	I0805 16:54:02.146818    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Using UUID 9d4afada-f7ac-45c4-bb5f-048d2cff9154
	I0805 16:54:02.173032    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Generated MAC ca:ff:4e:4e:a9:28
	I0805 16:54:02.173052    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-870000
	I0805 16:54:02.173084    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:02 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d4afada-f7ac-45c4-bb5f-048d2cff9154", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000198630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:54:02.173109    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:02 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9d4afada-f7ac-45c4-bb5f-048d2cff9154", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000198630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/initrd", Bootrom:"", CPUs:2, Memory:2048, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:54:02.173159    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:02 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/hyperkit.pid", "-c", "2", "-m", "2048M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9d4afada-f7ac-45c4-bb5f-048d2cff9154", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/force-systemd-env-870000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-870000"}
	I0805 16:54:02.173198    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:02 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/hyperkit.pid -c 2 -m 2048M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9d4afada-f7ac-45c4-bb5f-048d2cff9154 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/force-systemd-env-870000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=force-systemd-env-870000"
	I0805 16:54:02.173220    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:02 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:54:02.176148    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:02 DEBUG: hyperkit: Pid is 6311
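
	The CmdLine logged above is the complete argv handed to hyperkit: the pid file, CPU and memory limits, PCI slots for the hostbridge, the virtio-net NIC, the virtio-blk raw disk, and the ahci-cd boot ISO, plus a direct kexec boot of the extracted bzimage/initrd. A stripped-down sketch of composing and launching that command (the real driver goes through the hyperkit Go package rather than os/exec, and the paths here are abbreviated):

```go
package main

import (
	"fmt"
	"os/exec"
)

// startHyperkit launches a hyperkit VM with the same shape of argv the
// log shows; stateDir and the disk/ISO/kernel names are illustrative.
func startHyperkit(stateDir, uuid string, cpus, memMB int) (*exec.Cmd, error) {
	args := []string{
		"-A", "-u",
		"-F", stateDir + "/hyperkit.pid", // pid file the driver polls later
		"-c", fmt.Sprint(cpus),
		"-m", fmt.Sprintf("%dM", memMB),
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net", // vmnet derives the guest MAC from -U uuid
		"-U", uuid,
		"-s", "2:0,virtio-blk," + stateDir + "/disk.rawdisk",
		"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
		"-s", "4,virtio-rnd",
		"-f", "kexec," + stateDir + "/bzimage," + stateDir + "/initrd,loglevel=3 console=ttyS0",
	}
	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
	return cmd, cmd.Start() // asynchronous; the driver then watches the pid
}

func main() {
	if _, err := startHyperkit("/tmp/vmstate", "9d4afada-f7ac-45c4-bb5f-048d2cff9154", 2, 2048); err != nil {
		fmt.Println(err)
	}
}
```
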
	I0805 16:54:02.177279    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 0
	I0805 16:54:02.177294    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:02.177359    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:02.178270    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:02.178321    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:02.178335    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:02.178351    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:02.178362    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:02.178370    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:02.178378    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:02.178406    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:02.178418    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:02.178433    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:02.178453    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:02.178463    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:02.178471    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:02.178478    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:02.178485    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:02.178493    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:02.178502    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:02.178510    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:02.178517    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:02.184355    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:02 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:54:02.192498    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:02 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/force-systemd-env-870000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:54:02.193466    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:54:02.193481    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:54:02.193489    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:54:02.193500    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:54:02.572961    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:54:02.572994    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:54:02.687508    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:54:02.687535    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:54:02.687569    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:54:02.687610    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:54:02.688397    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:54:02.688409    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:54:04.179476    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 1
	I0805 16:54:04.179491    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:04.179581    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:04.180439    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:04.180468    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:04.180491    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:04.180505    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:04.180523    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:04.180529    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:04.180538    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:04.180545    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:04.180551    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:04.180558    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:04.180566    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:04.180573    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:04.180578    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:04.180585    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:04.180592    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:04.180598    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:04.180606    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:04.180612    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:04.180637    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:06.181116    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 2
	I0805 16:54:06.181135    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:06.181218    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:06.182012    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:06.182052    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:06.182062    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:06.182074    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:06.182080    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:06.182096    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:06.182104    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:06.182113    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:06.182122    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:06.182129    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:06.182141    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:06.182151    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:06.182168    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:06.182181    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:06.182190    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:06.182198    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:06.182205    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:06.182212    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:06.182224    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:08.078100    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:08 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0805 16:54:08.078243    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:08 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0805 16:54:08.078252    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:08 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0805 16:54:08.098713    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | 2024/08/05 16:54:08 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0805 16:54:08.183765    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 3
	I0805 16:54:08.183793    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:08.183955    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:08.185407    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:08.185528    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:08.185552    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:08.185569    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:08.185583    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:08.185634    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:08.185677    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:08.185733    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:08.185769    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:08.185779    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:08.185788    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:08.185824    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:08.185841    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:08.185862    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:08.185874    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:08.185887    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:08.185896    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:08.185904    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:08.185917    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:10.185731    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 4
	I0805 16:54:10.185749    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:10.185836    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:10.186612    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:10.186679    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:10.186692    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:10.186704    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:10.186718    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:10.186728    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:10.186735    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:10.186744    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:10.186751    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:10.186758    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:10.186783    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:10.186794    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:10.186802    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:10.186809    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:10.186819    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:10.186828    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:10.186836    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:10.186843    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:10.186854    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:12.188850    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 5
	I0805 16:54:12.188863    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:12.188915    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:12.189702    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:12.189746    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:12.189761    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:12.189778    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:12.189785    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:12.189792    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:12.189806    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:12.189832    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:12.189844    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:12.189851    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:12.189860    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:12.189868    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:12.189876    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:12.189883    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:12.189891    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:12.189899    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:12.189907    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:12.189914    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:12.189921    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:14.190961    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 6
	I0805 16:54:14.190994    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:14.191110    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:14.191904    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:14.191951    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:14.191964    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:14.191973    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:14.191980    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:14.191987    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:14.191995    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:14.192004    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:14.192011    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:14.192028    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:14.192040    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:14.192049    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:14.192058    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:14.192068    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:14.192077    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:14.192084    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:14.192090    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:14.192097    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:14.192102    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:16.194120    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 7
	I0805 16:54:16.194135    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:16.194230    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:16.195015    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:16.195060    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:16.195070    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:16.195084    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:16.195101    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:16.195109    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:16.195115    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:16.195123    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:16.195132    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:16.195148    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:16.195160    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:16.195172    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:16.195180    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:16.195197    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:16.195210    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:16.195218    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:16.195226    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:16.195233    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:16.195241    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:18.196607    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 8
	I0805 16:54:18.196621    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:18.196729    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:18.197581    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:18.197615    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:18.197628    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:18.197663    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:18.197674    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:18.197682    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:18.197692    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:18.197700    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:18.197708    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:18.197729    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:18.197738    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:18.197747    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:18.197754    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:18.197759    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:18.197767    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:18.197778    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:18.197786    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:18.197794    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:18.197803    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:20.199822    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 9
	I0805 16:54:20.199838    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:20.199971    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:20.200786    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:20.200803    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:20.200821    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:20.200832    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:20.200847    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:20.200858    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:20.200866    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:20.200875    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:20.200881    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:20.200891    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:20.200901    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:20.200910    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:20.200917    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:20.200925    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:20.200934    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:20.200941    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:20.200949    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:20.200957    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:20.200980    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:22.201880    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 10
	I0805 16:54:22.201894    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:22.201943    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:22.202831    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:22.202883    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:22.202894    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:22.202901    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:22.202915    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:22.202934    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:22.202953    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:22.202961    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:22.202970    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:22.202992    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:22.203005    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:22.203021    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:22.203030    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:22.203042    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:22.203052    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:22.203067    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:22.203075    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:22.203092    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:22.203105    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:24.203982    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 11
	I0805 16:54:24.204000    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:24.204116    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:24.204924    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:24.204979    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:24.204991    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:24.205000    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:24.205007    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:24.205014    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:24.205023    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:24.205030    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:24.205037    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:24.205052    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:24.205059    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:24.205067    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:24.205076    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:24.205083    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:24.205090    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:24.205108    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:24.205120    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:24.205128    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:24.205139    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:26.205155    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 12
	I0805 16:54:26.205171    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:26.205239    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:26.206021    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:26.206044    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:26.206051    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:26.206067    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:26.206077    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:26.206092    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:26.206100    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:26.206113    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:26.206125    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:26.206135    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:26.206144    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:26.206151    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:26.206159    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:26.206177    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:26.206185    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:26.206201    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:26.206213    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:26.206222    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:26.206230    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:28.208231    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 13
	I0805 16:54:28.208249    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:28.208354    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:28.209161    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:28.209210    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:28.209223    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:28.209261    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:28.209274    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:28.209282    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:28.209291    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:28.209298    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:28.209307    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:28.209314    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:28.209322    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:28.209338    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:28.209364    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:28.209372    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:28.209381    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:28.209390    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:28.209399    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:28.209407    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:28.209420    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:30.209415    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 14
	I0805 16:54:30.209429    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:30.209530    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:30.210291    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:30.210339    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:30.210348    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:30.210357    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:30.210363    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:30.210385    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:30.210400    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:30.210411    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:30.210421    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:30.210443    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:30.210457    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:30.210466    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:30.210475    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:30.210483    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:30.210490    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:30.210496    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:30.210504    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:30.210511    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:30.210518    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:32.210921    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 15
	I0805 16:54:32.210935    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:32.210966    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:32.211872    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:32.211911    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:32.211919    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:32.211927    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:32.211938    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:32.211944    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:32.211958    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:32.211965    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:32.211972    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:32.211981    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:32.211987    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:32.211994    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:32.212016    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:32.212028    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:32.212039    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:32.212049    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:32.212056    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:32.212064    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:32.212073    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:34.213687    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 16
	I0805 16:54:34.213703    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:34.213713    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:34.214816    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:34.214851    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:34.214863    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:34.214872    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:34.214881    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:34.214888    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:34.214895    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:34.214901    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:34.214909    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:34.214916    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:34.214924    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:34.214931    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:34.214939    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:34.214949    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:34.214957    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:34.214972    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:34.214987    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:34.215003    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:34.215012    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:36.217019    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 17
	I0805 16:54:36.217032    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:36.217074    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:36.217933    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:36.217985    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:36.217994    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:36.218001    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:36.218008    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:36.218016    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:36.218022    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:36.218036    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:36.218048    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:36.218057    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:36.218065    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:36.218073    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:36.218080    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:36.218088    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:36.218096    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:36.218125    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:36.218139    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:36.218148    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:36.218154    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:38.219067    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 18
	I0805 16:54:38.219081    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:38.219152    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:38.219933    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:38.219978    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:38.219990    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:38.219999    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:38.220006    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:38.220013    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:38.220020    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:38.220027    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:38.220034    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:38.220047    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:38.220059    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:38.220067    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:38.220073    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:38.220087    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:38.220101    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:38.220111    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:38.220118    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:38.220140    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:38.220153    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:40.220178    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 19
	I0805 16:54:40.220191    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:40.220312    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:40.221071    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:40.221085    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:40.221092    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:40.221098    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:40.221104    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:40.221112    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:40.221137    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:40.221151    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:40.221158    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:40.221173    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:40.221183    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:40.221195    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:40.221204    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:40.221211    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:40.221219    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:40.221236    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:40.221248    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:40.221263    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:40.221280    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:42.221763    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 20
	I0805 16:54:42.221777    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:42.221830    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:42.222660    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:42.222704    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:42.222715    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:42.222730    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:42.222737    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:42.222743    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:42.222750    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:42.222757    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:42.222766    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:42.222777    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:42.222786    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:42.222792    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:42.222799    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:42.222805    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:42.222813    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:42.222821    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:42.222828    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:42.222842    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:42.222850    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:44.223436    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 21
	I0805 16:54:44.223449    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:44.223484    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:44.224278    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:44.224303    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:44.224311    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:44.224321    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:44.224327    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:44.224341    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:44.224352    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:44.224359    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:44.224365    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:44.224384    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:44.224396    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:44.224405    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:44.224420    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:44.224428    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:44.224439    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:44.224446    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:44.224453    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:44.224460    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:44.224477    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
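	(Context for the loop above, not part of the driver's output: each "Attempt N" block is the hyperkit driver re-reading /var/db/dhcpd_leases on a roughly two-second cadence, looking for a lease whose hardware address matches the new VM's MAC, ca:ff:4e:4e:a9:28 here; the failure mode in this run is that the lease never appears, so only the 17 pre-existing entries are printed each time. Below is a minimal Go sketch of that polling pattern, assuming the standard macOS dhcpd_leases block format; the names parseLeases and waitForIP are illustrative, not the driver's actual API.)

	// Illustrative sketch only; the real logic lives in minikube's
	// docker-machine-driver-hyperkit. Names and structure here are assumptions.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
		"time"
	)

	// leaseEntry mirrors the fields the log prints for each lease:
	// {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ...}
	type leaseEntry struct {
		Name, IPAddress, HWAddress string
	}

	// parseLeases reads /var/db/dhcpd_leases, which stores one brace-delimited
	// block per lease with name=, ip_address=, and hw_address= lines.
	func parseLeases(path string) ([]leaseEntry, error) {
		f, err := os.Open(path)
		if err != nil {
			return nil, err
		}
		defer f.Close()

		var entries []leaseEntry
		var cur leaseEntry
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case line == "{":
				cur = leaseEntry{}
			case line == "}":
				entries = append(entries, cur)
			case strings.HasPrefix(line, "name="):
				cur.Name = strings.TrimPrefix(line, "name=")
			case strings.HasPrefix(line, "ip_address="):
				cur.IPAddress = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// hw_address is stored as "<type>,<mac>" (the "ID:1,..." in the
				// log); keep only the MAC portion for comparison.
				v := strings.TrimPrefix(line, "hw_address=")
				if i := strings.IndexByte(v, ','); i >= 0 {
					v = v[i+1:]
				}
				cur.HWAddress = v
			}
		}
		return entries, sc.Err()
	}

	// waitForIP polls the lease file until the VM's MAC shows up, matching the
	// ~2 s retry cadence visible in the attempts above.
	func waitForIP(mac string, attempts int) (string, error) {
		for i := 1; i <= attempts; i++ {
			entries, err := parseLeases("/var/db/dhcpd_leases")
			if err == nil {
				for _, e := range entries {
					if strings.EqualFold(e.HWAddress, mac) {
						return e.IPAddress, nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return "", fmt.Errorf("no DHCP lease for %s after %d attempts", mac, attempts)
	}

	func main() {
		// Hypothetical usage with the MAC being searched for in this run.
		ip, err := waitForIP("ca:ff:4e:4e:a9:28", 60)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("VM IP:", ip)
	}

	(End of aside; the driver's verbatim output continues below.)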
	I0805 16:54:46.225602    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 22
	I0805 16:54:46.225629    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:46.225724    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:46.226495    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:46.226560    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:46.226577    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:46.226584    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:46.226596    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:46.226608    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:46.226616    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:46.226631    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:46.226644    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:46.226661    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:46.226676    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:46.226684    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:46.226691    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:46.226699    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:46.226706    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:46.226711    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:46.226718    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:46.226724    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:46.226733    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:48.226757    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 23
	I0805 16:54:48.226769    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:48.226907    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:48.227749    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:48.227799    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:48.227811    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:48.227823    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:48.227832    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:48.227839    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:48.227845    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:48.227852    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:48.227859    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:48.227868    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:48.227877    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:48.227884    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:48.227899    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:48.227911    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:48.227919    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:48.227929    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:48.227937    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:48.227944    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:48.227952    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:50.229988    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 24
	I0805 16:54:50.230004    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:50.230108    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:50.231015    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:50.231062    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:50.231075    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:50.231084    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:50.231091    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:50.231098    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:50.231107    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:50.231129    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:50.231138    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:50.231145    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:50.231153    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:50.231165    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:50.231178    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:50.231186    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:50.231195    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:50.231203    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:50.231210    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:50.231219    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:50.231227    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:52.233230    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 25
	I0805 16:54:52.233245    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:52.233285    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:52.234128    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:52.234180    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:52.234193    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:52.234203    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:52.234210    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:52.234227    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:52.234233    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:52.234239    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:52.234246    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:52.234253    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:52.234259    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:52.234268    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:52.234277    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:52.234291    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:52.234303    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:52.234311    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:52.234320    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:52.234335    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:52.234344    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:54.236396    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 26
	I0805 16:54:54.236413    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:54.236479    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:54.237241    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:54.237292    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:54.237306    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:54.237330    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:54.237341    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:54.237353    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:54.237360    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:54.237367    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:54.237373    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:54.237389    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:54.237402    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:54.237411    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:54.237417    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:54.237424    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:54.237432    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:54.237441    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:54.237450    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:54.237458    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:54.237466    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:56.239458    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 27
	I0805 16:54:56.239472    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:56.239601    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:56.240389    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:56.240442    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:56.240453    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:56.240462    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:56.240471    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:56.240483    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:56.240490    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:56.240497    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:56.240503    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:56.240515    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:56.240530    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:56.240539    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:56.240550    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:56.240559    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:56.240568    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:56.240575    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:56.240583    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:56.240597    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:56.240611    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:54:58.240657    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 28
	I0805 16:54:58.240673    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:54:58.240726    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:54:58.241576    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:54:58.241626    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:54:58.241641    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:54:58.241654    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:54:58.241661    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:54:58.241669    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:54:58.241678    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:54:58.241688    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:54:58.241697    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:54:58.241716    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:54:58.241729    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:54:58.241737    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:54:58.241745    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:54:58.241752    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:54:58.241759    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:54:58.241765    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:54:58.241771    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:54:58.241779    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:54:58.241788    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:00.242476    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Attempt 29
	I0805 16:55:00.242504    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:55:00.242529    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | hyperkit pid from json: 6311
	I0805 16:55:00.243552    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Searching for ca:ff:4e:4e:a9:28 in /var/db/dhcpd_leases ...
	I0805 16:55:00.243603    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | Found 17 entries in /var/db/dhcpd_leases!
	I0805 16:55:00.243617    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ID:1,1a:fc:f3:eb:cb:4b Lease:0x66b2b679}
	I0805 16:55:00.243630    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:aa:29:c7:aa:64:1b ID:1,aa:29:c7:aa:64:1b Lease:0x66b2b5b9}
	I0805 16:55:00.243640    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:f6:c3:c1:ad:3:7d ID:1,f6:c3:c1:ad:3:7d Lease:0x66b163ce}
	I0805 16:55:00.243648    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:55:00.243659    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:55:00.243673    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b4e0}
	I0805 16:55:00.243686    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:55:00.243694    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:55:00.243702    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:55:00.243718    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:55:00.243732    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:55:00.243749    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:55:00.243760    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:55:00.243779    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:55:00.243789    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:55:00.243797    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:55:00.243804    6251 main.go:141] libmachine: (force-systemd-env-870000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:55:02.244544    6251 client.go:171] duration metric: took 1m0.903956416s to LocalClient.Create
	I0805 16:55:04.245932    6251 start.go:128] duration metric: took 1m2.957512188s to createHost
	I0805 16:55:04.245985    6251 start.go:83] releasing machines lock for "force-systemd-env-870000", held for 1m2.957641605s
	W0805 16:55:04.246057    6251 out.go:239] * Failed to start hyperkit VM. Running "minikube delete -p force-systemd-env-870000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:ff:4e:4e:a9:28
	I0805 16:55:04.310144    6251 out.go:177] 
	W0805 16:55:04.331169    6251 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for ca:ff:4e:4e:a9:28
	W0805 16:55:04.331182    6251 out.go:239] * 
	W0805 16:55:04.331861    6251 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:55:04.394124    6251 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-amd64 start -p force-systemd-env-870000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit " : exit status 80
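The long run of "Attempt N" blocks above is the hyperkit driver polling /var/db/dhcpd_leases roughly every two seconds for a lease whose hardware address matches the VM's generated MAC (ca:ff:4e:4e:a9:28). Every scan saw only the same 17 pre-existing leases, so the guest evidently never completed DHCP before the driver gave up after about a minute. A minimal Go sketch of that poll loop, assuming the entry layout shown in the parsed log lines above; the regex, timings, and function names here are illustrative, not the driver's actual code:

package main

import (
	"fmt"
	"os"
	"regexp"
	"time"
)

// leaseRe matches entries shaped like the parsed lines in the log above, e.g.
// {Name:minikube IPAddress:192.169.0.18 HWAddress:1a:fc:f3:eb:cb:4b ...}
var leaseRe = regexp.MustCompile(`IPAddress:(\S+) HWAddress:(\S+)`)

// findIPForMAC scans the lease file once, returning the IP bound to mac.
func findIPForMAC(path, mac string) (string, bool) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", false // unreadable file counts as "no lease yet"
	}
	for _, m := range leaseRe.FindAllStringSubmatch(string(data), -1) {
		if m[2] == mac {
			return m[1], true
		}
	}
	return "", false
}

func main() {
	const (
		leases  = "/var/db/dhcpd_leases"
		mac     = "ca:ff:4e:4e:a9:28" // MAC the driver was waiting for
		timeout = 60 * time.Second    // the create above gave up after ~1m
	)
	for deadline := time.Now().Add(timeout); time.Now().Before(deadline); {
		if ip, ok := findIPForMAC(leases, mac); ok {
			fmt.Printf("%s -> %s\n", mac, ip)
			return
		}
		time.Sleep(2 * time.Second) // one scan every ~2s, per the timestamps
	}
	fmt.Fprintln(os.Stderr, "IP address never found in dhcp leases file")
	os.Exit(1)
}

Run against a lease file that does contain an entry for the MAC, this prints the bound IP; otherwise it exits with the same "IP address never found" complaint seen in the error above.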
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-870000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p force-systemd-env-870000 ssh "docker info --format {{.CgroupDriver}}": exit status 50 (191.955604ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node force-systemd-env-870000 endpoint: failed to lookup ip for ""
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>

                                                
                                                
** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-amd64 -p force-systemd-env-870000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 50
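For context, the assertion failing here is the whole point of TestForceSystemdEnv: with MINIKUBE_FORCE_SYSTEMD set, Docker inside the guest should report "systemd" as its cgroup driver. A hedged sketch of the same probe from Go, using the binary and profile name copied from the log; the expectation check is ours. With no running VM it fails with DRV_CP_ENDPOINT before Docker is ever consulted, exactly as above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same probe the test runs: ask Docker inside the guest, over
	// `minikube ssh`, which cgroup driver it is using.
	out, err := exec.Command("out/minikube-darwin-amd64", "-p", "force-systemd-env-870000",
		"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
	if err != nil {
		fmt.Printf("probe failed: %v\n%s", err, out)
		return
	}
	if driver := strings.TrimSpace(string(out)); driver != "systemd" {
		fmt.Printf("expected cgroup driver systemd, got %q\n", driver)
	}
}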
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-08-05 16:55:04.697254 -0700 PDT m=+4084.269663472
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-870000 -n force-systemd-env-870000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p force-systemd-env-870000 -n force-systemd-env-870000: exit status 7 (76.917378ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 16:55:04.772319    6342 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0805 16:55:04.772343    6342 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-870000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "force-systemd-env-870000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-870000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-870000: (5.255367619s)
--- FAIL: TestForceSystemdEnv (233.95s)
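The post-mortem helpers above follow a fixed pattern worth knowing when reading these reports: query host state with `minikube status`, treat a non-zero exit as "host not running" and skip log retrieval, then delete the profile. A compressed sketch of that flow; the function name and control flow are ours, not helpers_test.go's:

package main

import (
	"fmt"
	"os/exec"
)

// postMortem mirrors the helper flow above: check host state, collect logs
// only when the host is actually running, then clean up the profile.
func postMortem(profile string) {
	status := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	if out, err := status.CombinedOutput(); err != nil {
		// Exit status 7 above: the host exists but is in state "Error".
		fmt.Printf("%q host is not running (%s), skipping log retrieval\n", profile, out)
	} else if err := exec.Command("out/minikube-darwin-amd64", "-p", profile, "logs").Run(); err != nil {
		fmt.Printf("log retrieval failed: %v\n", err)
	}
	_ = exec.Command("out/minikube-darwin-amd64", "delete", "-p", profile).Run()
}

func main() { postMortem("force-systemd-env-870000") }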

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (246.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-968000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-968000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-968000 -v=7 --alsologtostderr: (27.081857153s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-968000 --wait=true -v=7 --alsologtostderr
E0805 16:09:02.994635    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:11:19.142208    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:11:46.834984    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:11:50.544921    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-968000 --wait=true -v=7 --alsologtostderr: exit status 90 (3m34.869702045s)

                                                
                                                
-- stdout --
	* [ha-968000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-968000" primary control-plane node in "ha-968000" cluster
	* Restarting existing hyperkit VM for "ha-968000" ...
	* Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	* Enabled addons: 
	
	* Starting "ha-968000-m02" control-plane node in "ha-968000" cluster
	* Restarting existing hyperkit VM for "ha-968000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5
	* Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	  - env NO_PROXY=192.169.0.5
	* Verifying Kubernetes components...
	
	* Starting "ha-968000-m03" control-plane node in "ha-968000" cluster
	* Restarting existing hyperkit VM for "ha-968000-m03" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5,192.169.0.6
	* Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	  - env NO_PROXY=192.169.0.5
	  - env NO_PROXY=192.169.0.5,192.169.0.6
	* Verifying Kubernetes components...
	
	* Starting "ha-968000-m04" worker node in "ha-968000" cluster
	* Restarting existing hyperkit VM for "ha-968000-m04" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:08:35.679541    4013 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:08:35.680318    4013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:08:35.680328    4013 out.go:304] Setting ErrFile to fd 2...
	I0805 16:08:35.680346    4013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:08:35.680972    4013 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:08:35.682707    4013 out.go:298] Setting JSON to false
	I0805 16:08:35.706964    4013 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2286,"bootTime":1722897029,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:08:35.707087    4013 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:08:35.728606    4013 out.go:177] * [ha-968000] minikube v1.33.1 on Darwin 14.5
	I0805 16:08:35.770605    4013 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:08:35.770660    4013 notify.go:220] Checking for updates...
	I0805 16:08:35.813604    4013 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:08:35.834532    4013 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:08:35.855464    4013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:08:35.876389    4013 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:08:35.897688    4013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:08:35.919248    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:08:35.919436    4013 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:08:35.920085    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:08:35.920151    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:08:35.929520    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51884
	I0805 16:08:35.929878    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:08:35.930279    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:08:35.930302    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:08:35.930554    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:08:35.930686    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:35.959618    4013 out.go:177] * Using the hyperkit driver based on existing profile
	I0805 16:08:36.001252    4013 start.go:297] selected driver: hyperkit
	I0805 16:08:36.001281    4013 start.go:901] validating driver "hyperkit" against &{Name:ha-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:08:36.001519    4013 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:08:36.001702    4013 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:08:36.001927    4013 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 16:08:36.011596    4013 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 16:08:36.017027    4013 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:08:36.017051    4013 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 16:08:36.020140    4013 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:08:36.020202    4013 cni.go:84] Creating CNI manager for ""
	I0805 16:08:36.020212    4013 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0805 16:08:36.020294    4013 start.go:340] cluster config:
	{Name:ha-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
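The two multi-kilobyte lines above are each a single log statement: the profile config that start validates the existing machines against, including the HA API-server VIP (192.169.0.254) and the four-node list (three control-plane nodes plus the m04 worker). The same data is persisted as JSON under the profile directory, so it can be inspected directly. A small sketch, assuming the JSON key names match the struct dump above; the path will differ with MINIKUBE_HOME:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// clusterConfig holds just the fields we want from the profile's
// config.json; key names mirror the struct dump in the log above.
type clusterConfig struct {
	Name  string
	Nodes []struct {
		Name         string
		IP           string
		ControlPlane bool
		Worker       bool
	}
}

func main() {
	path := os.ExpandEnv("$HOME/.minikube/profiles/ha-968000/config.json")
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var cfg clusterConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, n := range cfg.Nodes {
		fmt.Printf("%-8s %-13s control-plane=%v worker=%v\n",
			n.Name, n.IP, n.ControlPlane, n.Worker)
	}
}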
	I0805 16:08:36.020400    4013 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:08:36.062580    4013 out.go:177] * Starting "ha-968000" primary control-plane node in "ha-968000" cluster
	I0805 16:08:36.085413    4013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:08:36.085486    4013 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 16:08:36.085505    4013 cache.go:56] Caching tarball of preloaded images
	I0805 16:08:36.085698    4013 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:08:36.085718    4013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:08:36.085921    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:08:36.086796    4013 start.go:360] acquireMachinesLock for ha-968000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:08:36.086915    4013 start.go:364] duration metric: took 94.676µs to acquireMachinesLock for "ha-968000"
	I0805 16:08:36.086955    4013 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:08:36.086972    4013 fix.go:54] fixHost starting: 
	I0805 16:08:36.087391    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:08:36.087423    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:08:36.096218    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51886
	I0805 16:08:36.096566    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:08:36.096926    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:08:36.096939    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:08:36.097199    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:08:36.097327    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:36.097443    4013 main.go:141] libmachine: (ha-968000) Calling .GetState
	I0805 16:08:36.097545    4013 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:08:36.097604    4013 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid from json: 3418
	I0805 16:08:36.098523    4013 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid 3418 missing from process table
	I0805 16:08:36.098563    4013 fix.go:112] recreateIfNeeded on ha-968000: state=Stopped err=<nil>
	I0805 16:08:36.098579    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	W0805 16:08:36.098669    4013 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:08:36.140439    4013 out.go:177] * Restarting existing hyperkit VM for "ha-968000" ...
	I0805 16:08:36.161262    4013 main.go:141] libmachine: (ha-968000) Calling .Start
	I0805 16:08:36.161541    4013 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:08:36.161569    4013 main.go:141] libmachine: (ha-968000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/hyperkit.pid
	I0805 16:08:36.163159    4013 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid 3418 missing from process table
	I0805 16:08:36.163172    4013 main.go:141] libmachine: (ha-968000) DBG | pid 3418 is in state "Stopped"
	I0805 16:08:36.163189    4013 main.go:141] libmachine: (ha-968000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/hyperkit.pid...
	I0805 16:08:36.163382    4013 main.go:141] libmachine: (ha-968000) DBG | Using UUID a9f347e2-e9fc-4e4f-b87b-350754bafb6d
	I0805 16:08:36.294197    4013 main.go:141] libmachine: (ha-968000) DBG | Generated MAC 3e:79:a8:cb:37:4b
	I0805 16:08:36.294223    4013 main.go:141] libmachine: (ha-968000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000
	I0805 16:08:36.294340    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a9f347e2-e9fc-4e4f-b87b-350754bafb6d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4780)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:08:36.294368    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a9f347e2-e9fc-4e4f-b87b-350754bafb6d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4780)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:08:36.294409    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a9f347e2-e9fc-4e4f-b87b-350754bafb6d", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/ha-968000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"}
	I0805 16:08:36.294446    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a9f347e2-e9fc-4e4f-b87b-350754bafb6d -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/ha-968000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"
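
The driver shells out to hyperkit with the argument list logged above; each flag maps directly onto the VM spec in the Start struct. For readability, the same invocation annotated (a recap of the logged command only, with $M standing in for the machine state directory):

	# Annotated recap of the hyperkit command logged above (sketch).
	M=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000
	# -A enables ACPI, -u keeps the RTC in UTC, -F writes the pid file,
	# -c/-m size the guest, each -s attaches a PCI slot device, -U pins the
	# VM UUID (which in turn pins the generated MAC and its DHCP lease),
	# -l wires up the serial console, and -f kexec,... boots kernel+initrd directly.
	/usr/local/bin/hyperkit -A -u -F "$M/hyperkit.pid" \
	  -c 2 -m 2200M \
	  -s 0:0,hostbridge -s 31,lpc \
	  -s 1:0,virtio-net \
	  -U a9f347e2-e9fc-4e4f-b87b-350754bafb6d \
	  -s 2:0,virtio-blk,"$M/ha-968000.rawdisk" \
	  -s 3,ahci-cd,"$M/boot2docker.iso" \
	  -s 4,virtio-rnd \
	  -l com1,autopty="$M/tty",log="$M/console-ring" \
	  -f kexec,"$M/bzimage","$M/initrd","earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"
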
	I0805 16:08:36.294464    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:08:36.295966    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 DEBUG: hyperkit: Pid is 4025
	I0805 16:08:36.296384    4013 main.go:141] libmachine: (ha-968000) DBG | Attempt 0
	I0805 16:08:36.296402    4013 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:08:36.296476    4013 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid from json: 4025
	I0805 16:08:36.298241    4013 main.go:141] libmachine: (ha-968000) DBG | Searching for 3e:79:a8:cb:37:4b in /var/db/dhcpd_leases ...
	I0805 16:08:36.298320    4013 main.go:141] libmachine: (ha-968000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0805 16:08:36.298334    4013 main.go:141] libmachine: (ha-968000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b15b5a}
	I0805 16:08:36.298341    4013 main.go:141] libmachine: (ha-968000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2acb6}
	I0805 16:08:36.298352    4013 main.go:141] libmachine: (ha-968000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b2ac1c}
	I0805 16:08:36.298378    4013 main.go:141] libmachine: (ha-968000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2ab94}
	I0805 16:08:36.298390    4013 main.go:141] libmachine: (ha-968000) DBG | Found match: 3e:79:a8:cb:37:4b
	I0805 16:08:36.298400    4013 main.go:141] libmachine: (ha-968000) DBG | IP: 192.169.0.5
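
The driver resolves the guest IP by scanning macOS's vmnet DHCP lease file for the MAC it generated for this VM. A hand-rolled equivalent of that lookup (a sketch; it assumes the lease entries are the usual multi-line { ... } blocks with ip_address listed before hw_address, and that hw_address carries the "1," type prefix seen in the entries above):

	# Print the leased IP for a given MAC from /var/db/dhcpd_leases (sketch).
	MAC=3e:79:a8:cb:37:4b
	awk -v mac="$MAC" '
	  /ip_address=/              { split($0, a, "="); ip = a[2] }
	  $0 ~ ("hw_address=1," mac) { print ip; exit }
	' /var/db/dhcpd_leases
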
	I0805 16:08:36.298431    4013 main.go:141] libmachine: (ha-968000) Calling .GetConfigRaw
	I0805 16:08:36.299288    4013 main.go:141] libmachine: (ha-968000) Calling .GetIP
	I0805 16:08:36.299496    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:08:36.299907    4013 machine.go:94] provisionDockerMachine start ...
	I0805 16:08:36.299917    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:36.300052    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:36.300161    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:36.300278    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:36.300399    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:36.300504    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:36.300629    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:36.300879    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:36.300887    4013 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:08:36.304094    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:08:36.358116    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:08:36.358849    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:08:36.358861    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:08:36.358871    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:08:36.358879    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:08:36.744699    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:08:36.744726    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:08:36.859121    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:08:36.859139    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:08:36.859155    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:08:36.859188    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:08:36.860075    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:08:36.860087    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:08:42.442082    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:42 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:08:42.442122    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:42 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:08:42.442133    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:42 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:08:42.468515    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:42 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:08:47.381320    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 16:08:47.381334    4013 main.go:141] libmachine: (ha-968000) Calling .GetMachineName
	I0805 16:08:47.381494    4013 buildroot.go:166] provisioning hostname "ha-968000"
	I0805 16:08:47.381505    4013 main.go:141] libmachine: (ha-968000) Calling .GetMachineName
	I0805 16:08:47.381614    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.381731    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:47.381824    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.381916    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.382009    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:47.382131    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:47.382292    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:47.382300    4013 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-968000 && echo "ha-968000" | sudo tee /etc/hostname
	I0805 16:08:47.461361    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-968000
	
	I0805 16:08:47.461391    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.461523    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:47.461610    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.461697    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.461801    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:47.461927    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:47.462076    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:47.462087    4013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-968000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-968000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-968000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:08:47.534682    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
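
Hostname provisioning is two idempotent steps: set the kernel hostname and persist it to /etc/hostname, then make sure /etc/hosts resolves it, rewriting an existing 127.0.1.1 line if present and appending one otherwise, so a re-run changes nothing. A quick manual check that both took effect, using the SSH key and user shown in the sshutil lines (illustrative only):

	# Verify hostname and hosts entry inside the guest (illustrative).
	ssh -i /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa \
	  docker@192.169.0.5 'hostname; grep ha-968000 /etc/hosts'
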
	I0805 16:08:47.534701    4013 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:08:47.534713    4013 buildroot.go:174] setting up certificates
	I0805 16:08:47.534720    4013 provision.go:84] configureAuth start
	I0805 16:08:47.534727    4013 main.go:141] libmachine: (ha-968000) Calling .GetMachineName
	I0805 16:08:47.534861    4013 main.go:141] libmachine: (ha-968000) Calling .GetIP
	I0805 16:08:47.534954    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.535056    4013 provision.go:143] copyHostCerts
	I0805 16:08:47.535084    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:08:47.535151    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:08:47.535160    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:08:47.535302    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:08:47.535496    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:08:47.535537    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:08:47.535561    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:08:47.535642    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:08:47.535782    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:08:47.535820    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:08:47.535825    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:08:47.535901    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:08:47.536041    4013 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.ha-968000 san=[127.0.0.1 192.169.0.5 ha-968000 localhost minikube]
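
configureAuth refreshes the host-side CA material and then mints a server certificate for the Docker daemon whose SANs cover loopback, the node IP, and the host/cluster names listed above. Roughly the same certificate could be produced by hand with openssl (a bash sketch, not minikube's actual code path, which generates it in Go; the 1095-day lifetime mirrors the CertExpiration:26280h0m0s in the config dump):

	# Mint a server cert signed by the minikube CA with matching SANs (sketch).
	CERTS=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs
	openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.ha-968000" \
	  -keyout server-key.pem -out server.csr
	openssl x509 -req -in server.csr -CA "$CERTS/ca.pem" -CAkey "$CERTS/ca-key.pem" \
	  -CAcreateserial -days 1095 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.169.0.5,DNS:ha-968000,DNS:localhost,DNS:minikube')
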
	I0805 16:08:47.710785    4013 provision.go:177] copyRemoteCerts
	I0805 16:08:47.710840    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:08:47.710858    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.710996    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:47.711136    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.711274    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:47.711374    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:08:47.750129    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:08:47.750206    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:08:47.771089    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:08:47.771160    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0805 16:08:47.789876    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:08:47.789938    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:08:47.809484    4013 provision.go:87] duration metric: took 274.74692ms to configureAuth
	I0805 16:08:47.809497    4013 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:08:47.809670    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:08:47.809683    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:47.809829    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.809915    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:47.810002    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.810076    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.810154    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:47.810265    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:47.810397    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:47.810405    4013 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:08:47.878284    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:08:47.878296    4013 buildroot.go:70] root file system type: tmpfs
	I0805 16:08:47.878387    4013 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:08:47.878399    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.878536    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:47.878623    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.878711    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.878808    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:47.878940    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:47.879074    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:47.879122    4013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:08:47.957253    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:08:47.957278    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.957421    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:47.957524    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.957614    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.957714    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:47.957844    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:47.957985    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:47.957996    4013 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:08:49.653715    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:08:49.653732    4013 machine.go:97] duration metric: took 13.353812952s to provisionDockerMachine
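
The write-then-diff dance above is what makes provisioning idempotent: the rendered unit goes to docker.service.new, and only if it differs from the installed unit is it moved into place and the daemon reloaded, enabled, and restarted. On this first boot the diff fails outright (no unit installed yet), so the install branch runs and systemd prints the "Created symlink" line. Re-running against an unchanged config would make the diff succeed and skip the restart entirely:

	# Install-if-changed idiom used above (recap).
	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	sudo diff -u "$cur" "$new" \
	  || { sudo mv "$new" "$cur"; sudo systemctl daemon-reload \
	       && sudo systemctl enable docker && sudo systemctl restart docker; }
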
	I0805 16:08:49.653746    4013 start.go:293] postStartSetup for "ha-968000" (driver="hyperkit")
	I0805 16:08:49.653760    4013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:08:49.653771    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:49.653973    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:08:49.653990    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:49.654090    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:49.654219    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:49.654313    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:49.654396    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:08:49.695524    4013 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:08:49.698720    4013 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:08:49.698734    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:08:49.698825    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:08:49.699014    4013 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:08:49.699020    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:08:49.699239    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:08:49.707453    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:08:49.726493    4013 start.go:296] duration metric: took 72.739242ms for postStartSetup
	I0805 16:08:49.726518    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:49.726678    4013 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0805 16:08:49.726689    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:49.726778    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:49.726859    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:49.726953    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:49.727030    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:08:49.773612    4013 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0805 16:08:49.773669    4013 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0805 16:08:49.839587    4013 fix.go:56] duration metric: took 13.752613014s for fixHost
	I0805 16:08:49.839610    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:49.839781    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:49.839886    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:49.839982    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:49.840087    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:49.840208    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:49.840351    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:49.840358    4013 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:08:49.909831    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722899330.049194417
	
	I0805 16:08:49.909843    4013 fix.go:216] guest clock: 1722899330.049194417
	I0805 16:08:49.909849    4013 fix.go:229] Guest: 2024-08-05 16:08:50.049194417 -0700 PDT Remote: 2024-08-05 16:08:49.8396 -0700 PDT m=+14.197025337 (delta=209.594417ms)
	I0805 16:08:49.909866    4013 fix.go:200] guest clock delta is within tolerance: 209.594417ms
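
fixHost samples the guest clock over SSH with date +%s.%N and compares it against the host's wall clock at the moment the command returns; here the guest is roughly 210ms ahead, inside minikube's skew tolerance, so no resync is attempted. A coarser hand-rolled version of the same check (seconds only, since BSD date on the macOS host does not support %N):

	# Compare guest and host clocks at one-second granularity (sketch).
	KEY=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa
	guest=$(ssh -i "$KEY" docker@192.169.0.5 'date +%s')
	host=$(date +%s)
	echo "skew: $(( host > guest ? host - guest : guest - host ))s"
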
	I0805 16:08:49.909870    4013 start.go:83] releasing machines lock for "ha-968000", held for 13.822941144s
	I0805 16:08:49.909890    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:49.910020    4013 main.go:141] libmachine: (ha-968000) Calling .GetIP
	I0805 16:08:49.910132    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:49.910474    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:49.910586    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:49.910664    4013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:08:49.910695    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:49.910746    4013 ssh_runner.go:195] Run: cat /version.json
	I0805 16:08:49.910757    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:49.910786    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:49.910854    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:49.910893    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:49.910967    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:49.910992    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:49.911086    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:49.911105    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:08:49.911177    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:08:49.948334    4013 ssh_runner.go:195] Run: systemctl --version
	I0805 16:08:49.997557    4013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 16:08:50.001927    4013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:08:50.001971    4013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:08:50.014441    4013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:08:50.014455    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:08:50.014568    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:08:50.030880    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:08:50.040000    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:08:50.048917    4013 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:08:50.048956    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:08:50.058052    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:08:50.067040    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:08:50.075877    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:08:50.084739    4013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:08:50.093910    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:08:50.102684    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:08:50.111468    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:08:50.120485    4013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:08:50.128670    4013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:08:50.136701    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:08:50.239872    4013 ssh_runner.go:195] Run: sudo systemctl restart containerd
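
Before settling on a runtime, minikube normalizes /etc/containerd/config.toml with the sed passes above: pin the sandbox image to registry.k8s.io/pause:3.9, force SystemdCgroup = false (the cgroupfs driver, matching the cgroupDriver: cgroupfs kubelet setting generated later), migrate the v1 runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. The cgroup toggle, isolated for reference:

	# Force containerd's runc shim onto the cgroupfs driver, then verify (recap).
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	grep -n 'SystemdCgroup' /etc/containerd/config.toml
	sudo systemctl restart containerd
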
	I0805 16:08:50.259056    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:08:50.259134    4013 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:08:50.276716    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:08:50.288092    4013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:08:50.305475    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:08:50.315851    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:08:50.325889    4013 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:08:50.345027    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:08:50.355226    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:08:50.370181    4013 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:08:50.373242    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:08:50.380619    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:08:50.394005    4013 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:08:50.490673    4013 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:08:50.595291    4013 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:08:50.595364    4013 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:08:50.609503    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:08:50.704344    4013 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:08:53.027644    4013 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.323281261s)
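
Docker's own cgroup driver is set separately, via the 130-byte /etc/docker/daemon.json scp'd above. The file contents are not echoed in the log, so the following is an assumption based on the "configuring docker to use cgroupfs" message and Docker's standard exec-opts knob:

	# Assumed shape of the generated daemon.json (actual bytes not logged).
	sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
	{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker
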
	I0805 16:08:53.027701    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:08:53.038843    4013 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 16:08:53.053238    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:08:53.063556    4013 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:08:53.166406    4013 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:08:53.281072    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:08:53.386855    4013 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:08:53.400726    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:08:53.412004    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:08:53.527406    4013 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:08:53.592203    4013 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:08:53.592286    4013 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:08:53.596745    4013 start.go:563] Will wait 60s for crictl version
	I0805 16:08:53.596797    4013 ssh_runner.go:195] Run: which crictl
	I0805 16:08:53.600648    4013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:08:53.626561    4013 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
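
With cri-dockerd restarted, crictl reaches Docker through the /var/run/cri-dockerd.sock endpoint written to /etc/crictl.yaml above; the same version query can be made with the endpoint spelled out explicitly:

	# Query the CRI runtime directly over the cri-dockerd socket.
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
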
	I0805 16:08:53.626630    4013 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:08:53.645043    4013 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:08:53.705589    4013 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 16:08:53.705632    4013 main.go:141] libmachine: (ha-968000) Calling .GetIP
	I0805 16:08:53.705996    4013 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0805 16:08:53.710588    4013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:08:53.720355    4013 kubeadm.go:883] updating cluster {Name:ha-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 16:08:53.720443    4013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:08:53.720494    4013 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:08:53.733778    4013 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240730-75a5af0c
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0805 16:08:53.733792    4013 docker.go:615] Images already preloaded, skipping extraction
	I0805 16:08:53.733871    4013 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:08:53.750560    4013 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240730-75a5af0c
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0805 16:08:53.750581    4013 cache_images.go:84] Images are preloaded, skipping loading
	I0805 16:08:53.750593    4013 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.30.3 docker true true} ...
	I0805 16:08:53.750678    4013 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-968000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 16:08:53.750747    4013 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 16:08:53.787873    4013 cni.go:84] Creating CNI manager for ""
	I0805 16:08:53.787890    4013 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0805 16:08:53.787901    4013 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 16:08:53.787917    4013 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-968000 NodeName:ha-968000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 16:08:53.787998    4013 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-968000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
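
	The generated file stacks four documents: InitConfiguration (this node's advertise address, bind port, and CRI socket), ClusterConfiguration (cluster-wide settings, including the control-plane.minikube.internal:8443 endpoint that fronts the VIP, the cert SANs, and the pod/service subnets), KubeletConfiguration (cgroupfs driver, disk eviction disabled), and KubeProxyConfiguration. It is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below, and can be sanity-checked offline with the bundled kubeadm (a sketch; the config validate subcommand is assumed present in this kubeadm release):

	# Offline sanity check of the rendered kubeadm config (sketch).
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
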
	
	I0805 16:08:53.788013    4013 kube-vip.go:115] generating kube-vip config ...
	I0805 16:08:53.788070    4013 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 16:08:53.800656    4013 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 16:08:53.800732    4013 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
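
	kube-vip runs as a static pod on every control-plane node; with cp_enable set and lb_enable auto-enabled as logged above, the node currently holding the plndr-cp-lock lease ARPs the virtual IP 192.169.0.254 on eth0 and load-balances port 8443 across the API servers, which is why the kubeadm documents above use 192.169.0.254 as the APIServerHAVIP and control-plane endpoint. Two quick probes once a control plane is up (illustrative):

	# From the host: the VIP should accept connections on the API port.
	nc -vz 192.169.0.254 8443
	# On the current leader: the VIP is bound to eth0 (vip_interface above).
	ip addr show dev eth0 | grep 192.169.0.254
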
	I0805 16:08:53.800782    4013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 16:08:53.809476    4013 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:08:53.809517    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0805 16:08:53.816818    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0805 16:08:53.830799    4013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:08:53.844236    4013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0805 16:08:53.858097    4013 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0805 16:08:53.871426    4013 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0805 16:08:53.874277    4013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:08:53.883655    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:08:53.988496    4013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:08:54.003102    4013 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000 for IP: 192.169.0.5
	I0805 16:08:54.003116    4013 certs.go:194] generating shared ca certs ...
	I0805 16:08:54.003129    4013 certs.go:226] acquiring lock for ca certs: {Name:mkb83e058d89c7d4e66f4136f377a3c305b13735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:08:54.003311    4013 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key
	I0805 16:08:54.003384    4013 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key
	I0805 16:08:54.003396    4013 certs.go:256] generating profile certs ...
	I0805 16:08:54.003511    4013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.key
	I0805 16:08:54.003533    4013 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key.e79882c6
	I0805 16:08:54.003547    4013 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt.e79882c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0805 16:08:54.115170    4013 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt.e79882c6 ...
	I0805 16:08:54.115186    4013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt.e79882c6: {Name:mk08e7d67872e7bcbb9c4a5ebb3c1a0585610c24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:08:54.115545    4013 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key.e79882c6 ...
	I0805 16:08:54.115555    4013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key.e79882c6: {Name:mk05314b1c47ab3f7e3ebdc93ec7e7e8886a1b84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:08:54.115785    4013 certs.go:381] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt.e79882c6 -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt
	I0805 16:08:54.116009    4013 certs.go:385] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key.e79882c6 -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key
	I0805 16:08:54.116270    4013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key
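Note: crypto.go generates the apiserver serving certificate with every address a client might use: the in-cluster service IP, loopback, the three control-plane node IPs, and the HA VIP. A hedged openssl equivalent (file names and the CN are assumptions; only the SAN list comes from this log):

	# Sketch: issue an apiserver cert signed by a CA, with the logged SANs.
	cat > san.cnf <<-'EOF'
	[req]
	distinguished_name = dn
	req_extensions = v3_req
	[dn]
	[v3_req]
	subjectAltName = @alt
	[alt]
	IP.1 = 10.96.0.1
	IP.2 = 127.0.0.1
	IP.3 = 10.0.0.1
	IP.4 = 192.169.0.5
	IP.5 = 192.169.0.6
	IP.6 = 192.169.0.7
	IP.7 = 192.169.0.254
	EOF
	openssl req -new -newkey rsa:2048 -nodes -keyout apiserver.key \
	  -subj "/CN=minikube" -out apiserver.csr -config san.cnf
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key \
	  -CAcreateserial -out apiserver.crt -days 365 \
	  -extensions v3_req -extfile san.cnf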
	I0805 16:08:54.116285    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 16:08:54.116311    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 16:08:54.116333    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 16:08:54.116355    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 16:08:54.116375    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 16:08:54.116396    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 16:08:54.116416    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 16:08:54.116436    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 16:08:54.116538    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem (1338 bytes)
	W0805 16:08:54.116595    4013 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0805 16:08:54.116605    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:08:54.116642    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem (1082 bytes)
	I0805 16:08:54.116678    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:08:54.116714    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem (1675 bytes)
	I0805 16:08:54.116792    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:08:54.116828    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem -> /usr/share/ca-certificates/1678.pem
	I0805 16:08:54.116855    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /usr/share/ca-certificates/16782.pem
	I0805 16:08:54.116877    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:08:54.117335    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:08:54.150739    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 16:08:54.186504    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:08:54.226561    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:08:54.269928    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0805 16:08:54.303048    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 16:08:54.323374    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:08:54.342974    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 16:08:54.363396    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0805 16:08:54.383241    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0805 16:08:54.402950    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:08:54.422603    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 16:08:54.436211    4013 ssh_runner.go:195] Run: openssl version
	I0805 16:08:54.440410    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0805 16:08:54.448686    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0805 16:08:54.452045    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:08:54.452085    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0805 16:08:54.456273    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0805 16:08:54.464533    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0805 16:08:54.472739    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0805 16:08:54.476114    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:08:54.476150    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0805 16:08:54.480401    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 16:08:54.488643    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:08:54.496792    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:08:54.500141    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:08:54.500183    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:08:54.504411    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
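Note: each triplet above installs a CA into OpenSSL's hashed-symlink layout: place the PEM under /usr/share/ca-certificates, link it under /etc/ssl/certs, then add a <subject-hash>.0 symlink so openssl's verify lookup can find it. Reproducing the minikubeCA case from this log:

	# Sketch: recreate the b5213941.0 lookup link for minikubeCA.pem.
	pem=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$pem" /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in "$pem")
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"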
	I0805 16:08:54.512563    4013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:08:54.516172    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 16:08:54.520959    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 16:08:54.525326    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 16:08:54.530085    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 16:08:54.534367    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 16:08:54.538835    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
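Note: the six openssl runs above are expiry probes: -checkend 86400 exits 0 only if the certificate is still valid 24 hours from now, which is what lets minikube reuse the existing control-plane certs. The same check in isolation:

	# Sketch: the exit status, not the output, carries the answer.
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo "cert good for at least 24h"
	else
	  echo "cert expires within 24h, regenerate before restarting"
	fi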
	I0805 16:08:54.543179    4013 kubeadm.go:392] StartCluster: {Name:ha-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:08:54.543300    4013 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:08:54.556340    4013 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 16:08:54.563823    4013 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 16:08:54.563834    4013 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 16:08:54.563876    4013 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 16:08:54.571534    4013 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:08:54.571871    4013 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-968000" does not appear in /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:08:54.571963    4013 kubeconfig.go:62] /Users/jenkins/minikube-integration/19373-1122/kubeconfig needs updating (will repair): [kubeconfig missing "ha-968000" cluster setting kubeconfig missing "ha-968000" context setting]
	I0805 16:08:54.572632    4013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/kubeconfig: {Name:mk2a0d8b4d330b3c26432fc65d015ddf98a9cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:08:54.573442    4013 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:08:54.573629    4013 kapi.go:59] client config for ha-968000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x85c5060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:08:54.573946    4013 cert_rotation.go:137] Starting client certificate rotation controller
	I0805 16:08:54.574116    4013 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 16:08:54.581700    4013 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0805 16:08:54.581717    4013 kubeadm.go:597] duration metric: took 17.878919ms to restartPrimaryControlPlane
	I0805 16:08:54.581733    4013 kubeadm.go:394] duration metric: took 38.554869ms to StartCluster
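Note: restartPrimaryControlPlane's fast path hinges on the diff at 16:08:54.574116: if the freshly rendered kubeadm.yaml.new matches the file already on the node, the cluster is restarted in place with no kubeadm re-run. The decision reduces to (paths from this log):

	# Sketch of the decision; the exit status of diff is what matters.
	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
	  echo "no reconfiguration required"   # the branch taken in this run
	else
	  echo "kubeadm config drifted, reconfiguration needed"
	fi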
	I0805 16:08:54.581748    4013 settings.go:142] acquiring lock: {Name:mk564a817a54ecf2aef16a4d2309e85208c0231f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:08:54.581853    4013 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:08:54.582215    4013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/kubeconfig: {Name:mk2a0d8b4d330b3c26432fc65d015ddf98a9cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:08:54.582428    4013 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:08:54.582441    4013 start.go:241] waiting for startup goroutines ...
	I0805 16:08:54.582452    4013 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 16:08:54.582577    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:08:54.626035    4013 out.go:177] * Enabled addons: 
	I0805 16:08:54.646951    4013 addons.go:510] duration metric: took 64.498286ms for enable addons: enabled=[]
	I0805 16:08:54.646991    4013 start.go:246] waiting for cluster config update ...
	I0805 16:08:54.647007    4013 start.go:255] writing updated cluster config ...
	I0805 16:08:54.669067    4013 out.go:177] 
	I0805 16:08:54.690499    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:08:54.690643    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:08:54.713097    4013 out.go:177] * Starting "ha-968000-m02" control-plane node in "ha-968000" cluster
	I0805 16:08:54.754948    4013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:08:54.755014    4013 cache.go:56] Caching tarball of preloaded images
	I0805 16:08:54.755180    4013 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:08:54.755198    4013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:08:54.755327    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:08:54.756294    4013 start.go:360] acquireMachinesLock for ha-968000-m02: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:08:54.756399    4013 start.go:364] duration metric: took 80.734µs to acquireMachinesLock for "ha-968000-m02"
	I0805 16:08:54.756425    4013 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:08:54.756433    4013 fix.go:54] fixHost starting: m02
	I0805 16:08:54.756872    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:08:54.756903    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:08:54.766304    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51908
	I0805 16:08:54.766655    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:08:54.766978    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:08:54.766996    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:08:54.767193    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:08:54.767300    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:08:54.767383    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetState
	I0805 16:08:54.767464    4013 main.go:141] libmachine: (ha-968000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:08:54.767541    4013 main.go:141] libmachine: (ha-968000-m02) DBG | hyperkit pid from json: 3958
	I0805 16:08:54.768456    4013 main.go:141] libmachine: (ha-968000-m02) DBG | hyperkit pid 3958 missing from process table
	I0805 16:08:54.768475    4013 fix.go:112] recreateIfNeeded on ha-968000-m02: state=Stopped err=<nil>
	I0805 16:08:54.768483    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	W0805 16:08:54.768562    4013 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:08:54.811088    4013 out.go:177] * Restarting existing hyperkit VM for "ha-968000-m02" ...
	I0805 16:08:54.832129    4013 main.go:141] libmachine: (ha-968000-m02) Calling .Start
	I0805 16:08:54.832449    4013 main.go:141] libmachine: (ha-968000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:08:54.832594    4013 main.go:141] libmachine: (ha-968000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/hyperkit.pid
	I0805 16:08:54.834273    4013 main.go:141] libmachine: (ha-968000-m02) DBG | hyperkit pid 3958 missing from process table
	I0805 16:08:54.834290    4013 main.go:141] libmachine: (ha-968000-m02) DBG | pid 3958 is in state "Stopped"
	I0805 16:08:54.834314    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/hyperkit.pid...
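Note: fix.go treats the leftover hyperkit.pid as evidence of an unclean shutdown: pid 3958 is no longer in the process table, so the file is stale and removed before a fresh hyperkit is launched. The same check in shell form (pid-file path from this log):

	# Sketch: kill -0 probes for process existence without sending a signal.
	pidfile=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/hyperkit.pid
	if [ -f "$pidfile" ] && ! kill -0 "$(cat "$pidfile")" 2>/dev/null; then
	  rm -f "$pidfile"   # stale: the recorded process is already gone
	fi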
	I0805 16:08:54.834555    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Using UUID fe2b7178-e807-4f71-b597-390ca402ab71
	I0805 16:08:54.862624    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Generated MAC b2:64:5d:40:b:b5
	I0805 16:08:54.862655    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000
	I0805 16:08:54.862830    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fe2b7178-e807-4f71-b597-390ca402ab71", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaa20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:08:54.862873    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fe2b7178-e807-4f71-b597-390ca402ab71", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaa20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:08:54.862907    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "fe2b7178-e807-4f71-b597-390ca402ab71", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/ha-968000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"}
	I0805 16:08:54.862951    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U fe2b7178-e807-4f71-b597-390ca402ab71 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/ha-968000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"
	I0805 16:08:54.862972    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:08:54.864230    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 DEBUG: hyperkit: Pid is 4036
	I0805 16:08:54.864617    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Attempt 0
	I0805 16:08:54.864628    4013 main.go:141] libmachine: (ha-968000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:08:54.864712    4013 main.go:141] libmachine: (ha-968000-m02) DBG | hyperkit pid from json: 4036
	I0805 16:08:54.866673    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Searching for b2:64:5d:40:b:b5 in /var/db/dhcpd_leases ...
	I0805 16:08:54.866730    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0805 16:08:54.866746    4013 main.go:141] libmachine: (ha-968000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2acfd}
	I0805 16:08:54.866756    4013 main.go:141] libmachine: (ha-968000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b15b5a}
	I0805 16:08:54.866763    4013 main.go:141] libmachine: (ha-968000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2acb6}
	I0805 16:08:54.866779    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Found match: b2:64:5d:40:b:b5
	I0805 16:08:54.866785    4013 main.go:141] libmachine: (ha-968000-m02) DBG | IP: 192.169.0.6
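Note: the driver recovers the VM's IP by scanning macOS's DHCP lease database for the MAC hyperkit generated (b2:64:5d:40:b:b5), which matches the 192.169.0.6 lease. A rough shell equivalent, with the lease-entry layout assumed from the DBG output above:

	# Sketch: map a guest MAC to its leased IP via /var/db/dhcpd_leases.
	mac="b2:64:5d:40:b:b5"
	awk -v mac="$mac" -F= '
	  $1 ~ /ip_address/ { ip = $2 }
	  $1 ~ /hw_address/ && index($2, mac) { print ip; exit }
	' /var/db/dhcpd_leases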
	I0805 16:08:54.866826    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetConfigRaw
	I0805 16:08:54.867497    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetIP
	I0805 16:08:54.867687    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:08:54.868091    4013 machine.go:94] provisionDockerMachine start ...
	I0805 16:08:54.868103    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:08:54.868265    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:08:54.868366    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:08:54.868470    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:08:54.868561    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:08:54.868654    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:08:54.868809    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:54.868963    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:08:54.868973    4013 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:08:54.872068    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:08:54.880205    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:08:54.881201    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:08:54.881214    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:08:54.881243    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:08:54.881257    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:08:55.265892    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:08:55.265907    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:08:55.380667    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:08:55.380687    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:08:55.380695    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:08:55.380701    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:08:55.381533    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:08:55.381546    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:09:00.973735    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:09:00 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:09:00.973856    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:09:00 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:09:00.973866    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:09:00 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:09:00.997819    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:09:00 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:09:05.931816    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 16:09:05.931831    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetMachineName
	I0805 16:09:05.931997    4013 buildroot.go:166] provisioning hostname "ha-968000-m02"
	I0805 16:09:05.932009    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetMachineName
	I0805 16:09:05.932102    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:05.932202    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:05.932286    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:05.932365    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:05.932456    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:05.932575    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:05.932721    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:09:05.932729    4013 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-968000-m02 && echo "ha-968000-m02" | sudo tee /etc/hostname
	I0805 16:09:05.993192    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-968000-m02
	
	I0805 16:09:05.993215    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:05.993338    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:05.993436    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:05.993511    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:05.993594    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:05.993723    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:05.993859    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:09:05.993871    4013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-968000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-968000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-968000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:09:06.050566    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:09:06.050581    4013 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:09:06.050591    4013 buildroot.go:174] setting up certificates
	I0805 16:09:06.050596    4013 provision.go:84] configureAuth start
	I0805 16:09:06.050603    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetMachineName
	I0805 16:09:06.050733    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetIP
	I0805 16:09:06.050844    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:06.050935    4013 provision.go:143] copyHostCerts
	I0805 16:09:06.050963    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:09:06.051010    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:09:06.051016    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:09:06.051159    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:09:06.051373    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:09:06.051403    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:09:06.051408    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:09:06.051520    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:09:06.051663    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:09:06.051692    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:09:06.051697    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:09:06.051762    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:09:06.051905    4013 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.ha-968000-m02 san=[127.0.0.1 192.169.0.6 ha-968000-m02 localhost minikube]
	I0805 16:09:06.144117    4013 provision.go:177] copyRemoteCerts
	I0805 16:09:06.144168    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:09:06.144182    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:06.144315    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:06.144419    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.144519    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:06.144605    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/id_rsa Username:docker}
	I0805 16:09:06.177583    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:09:06.177652    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:09:06.196674    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:09:06.196731    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:09:06.215833    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:09:06.215904    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 16:09:06.234708    4013 provision.go:87] duration metric: took 184.105335ms to configureAuth
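Note: configureAuth has now placed ca.pem, server.pem, and server-key.pem on the guest for dockerd's TLS listener (the unit rendered later in this log starts dockerd with --tlsverify on tcp://0.0.0.0:2376). Once docker is up, the daemon should be reachable from the host with the client certs this run copied, e.g.:

	# Hypothetical check, not part of the test run.
	docker --tlsverify \
	  --tlscacert=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem \
	  --tlscert=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem \
	  --tlskey=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem \
	  -H tcp://192.169.0.6:2376 version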
	I0805 16:09:06.234721    4013 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:09:06.234888    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:09:06.234902    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:06.235034    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:06.235129    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:06.235219    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.235306    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.235377    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:06.235486    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:06.235620    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:09:06.235627    4013 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:09:06.286203    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:09:06.286215    4013 buildroot.go:70] root file system type: tmpfs
	I0805 16:09:06.286297    4013 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:09:06.286308    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:06.286429    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:06.286523    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.286613    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.286698    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:06.286817    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:06.286956    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:09:06.287002    4013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:09:06.347900    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:09:06.347916    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:06.348060    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:06.348168    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.348290    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.348380    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:06.348531    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:06.348709    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:09:06.348724    4013 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:09:07.986428    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:09:07.986451    4013 machine.go:97] duration metric: took 13.118346339s to provisionDockerMachine
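Note: the diff-or-install one-liner above is a write-if-changed guard: the rendered docker.service.new only replaces the installed unit (and triggers daemon-reload, enable, and restart) when the two files differ. Here diff failed because no unit existed yet, so the new one was installed and the symlink created. Expanded form of the same guard:

	# Sketch: install the unit only when its content actually changed.
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new >/dev/null 2>&1; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload
	  sudo systemctl -f enable docker
	  sudo systemctl -f restart docker
	fi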
	I0805 16:09:07.986459    4013 start.go:293] postStartSetup for "ha-968000-m02" (driver="hyperkit")
	I0805 16:09:07.986469    4013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:09:07.986480    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:07.986670    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:09:07.986681    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:07.986783    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:07.986882    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:07.986962    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:07.987053    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/id_rsa Username:docker}
	I0805 16:09:08.025708    4013 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:09:08.030674    4013 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:09:08.030690    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:09:08.030788    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:09:08.030933    4013 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:09:08.030940    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:09:08.031094    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:09:08.040549    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:09:08.073731    4013 start.go:296] duration metric: took 87.255709ms for postStartSetup
	I0805 16:09:08.073758    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:08.073944    4013 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0805 16:09:08.073958    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:08.074051    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:08.074132    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:08.074215    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:08.074303    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/id_rsa Username:docker}
	I0805 16:09:08.106482    4013 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0805 16:09:08.106540    4013 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0805 16:09:08.160338    4013 fix.go:56] duration metric: took 13.403896455s for fixHost
	I0805 16:09:08.160384    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:08.160527    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:08.160625    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:08.160714    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:08.160794    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:08.160927    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:08.161086    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:09:08.161094    4013 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:09:08.212458    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722899348.353849181
	
	I0805 16:09:08.212468    4013 fix.go:216] guest clock: 1722899348.353849181
	I0805 16:09:08.212476    4013 fix.go:229] Guest: 2024-08-05 16:09:08.353849181 -0700 PDT Remote: 2024-08-05 16:09:08.160354 -0700 PDT m=+32.517773342 (delta=193.495181ms)
	I0805 16:09:08.212487    4013 fix.go:200] guest clock delta is within tolerance: 193.495181ms
	I0805 16:09:08.212490    4013 start.go:83] releasing machines lock for "ha-968000-m02", held for 13.45607681s
	I0805 16:09:08.212505    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:08.212639    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetIP
	I0805 16:09:08.235368    4013 out.go:177] * Found network options:
	I0805 16:09:08.255968    4013 out.go:177]   - NO_PROXY=192.169.0.5
	W0805 16:09:08.277055    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:09:08.277126    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:08.277962    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:08.278232    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:08.278363    4013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:09:08.278403    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	W0805 16:09:08.278441    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:09:08.278542    4013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:09:08.278561    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:08.278609    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:08.278735    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:08.278828    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:08.278924    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:08.279039    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:08.279094    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:08.279296    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/id_rsa Username:docker}
	I0805 16:09:08.279328    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/id_rsa Username:docker}
	W0805 16:09:08.308476    4013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:09:08.308543    4013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:09:08.366966    4013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:09:08.366989    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:09:08.367106    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:09:08.383096    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:09:08.391318    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:09:08.399437    4013 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:09:08.399485    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:09:08.407713    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:09:08.415945    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:09:08.424060    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:09:08.432199    4013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:09:08.440635    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:09:08.449476    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:09:08.457693    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:09:08.465963    4013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:09:08.473316    4013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:09:08.480715    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:09:08.580965    4013 ssh_runner.go:195] Run: sudo systemctl restart containerd
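The burst of sed edits above rewrites /etc/containerd/config.toml in place: it pins the sandbox image to registry.k8s.io/pause:3.9, forces SystemdCgroup = false (the "cgroupfs" driver named in the log), migrates io.containerd.runtime.v1.linux and runc.v1 references to io.containerd.runc.v2, and points conf_dir back at /etc/cni/net.d. A quick post-restart check, as a sketch:

  # confirm the cgroup setting the edits landed on; "containerd config dump"
  # prints the effective merged configuration
  sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
  sudo containerd config dump | grep SystemdCgroup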
	I0805 16:09:08.599460    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:09:08.599526    4013 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:09:08.618244    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:09:08.628953    4013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:09:08.643835    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:09:08.654207    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:09:08.667243    4013 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:09:08.688662    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:09:08.699359    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:09:08.714408    4013 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:09:08.717488    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:09:08.724576    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:09:08.738058    4013 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:09:08.841454    4013 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:09:08.945955    4013 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:09:08.945979    4013 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:09:08.960827    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:09:09.064765    4013 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:09:11.412428    4013 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.347643222s)
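The 130-byte /etc/docker/daemon.json copied above is what actually flips Docker to the cgroupfs driver. The log does not show its contents; minikube's file is plausibly along these lines (contents assumed, not taken from the log), and the driver is verifiable after the restart:

  sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
  {
    "exec-opts": ["native.cgroupdriver=cgroupfs"],
    "log-driver": "json-file",
    "log-opts": { "max-size": "100m" },
    "storage-driver": "overlay2"
  }
  EOF
  sudo systemctl restart docker
  docker info --format '{{.CgroupDriver}}'   # expect: cgroupfs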
	I0805 16:09:11.412491    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:09:11.422964    4013 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 16:09:11.435663    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:09:11.446013    4013 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:09:11.539337    4013 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:09:11.650058    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:09:11.748634    4013 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:09:11.762213    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:09:11.773039    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:09:11.872006    4013 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:09:11.939388    4013 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:09:11.939480    4013 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:09:11.943952    4013 start.go:563] Will wait 60s for crictl version
	I0805 16:09:11.944006    4013 ssh_runner.go:195] Run: which crictl
	I0805 16:09:11.947391    4013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:09:11.980231    4013 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0805 16:09:11.980302    4013 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:09:11.997853    4013 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:09:12.060154    4013 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 16:09:12.080904    4013 out.go:177]   - env NO_PROXY=192.169.0.5
	I0805 16:09:12.102334    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetIP
	I0805 16:09:12.102720    4013 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0805 16:09:12.107517    4013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
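The one-liner above is an idempotent /etc/hosts update: strip any stale host.minikube.internal entry, append the current mapping, and install the result with sudo cp (a plain sudo redirect would be opened by the unprivileged shell, not by root). Spelled out:

  { grep -v $'\thost.minikube.internal$' /etc/hosts
    printf '192.169.0.1\thost.minikube.internal\n'; } > /tmp/h.$$
  sudo cp /tmp/h.$$ /etc/hosts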
	I0805 16:09:12.117349    4013 mustload.go:65] Loading cluster: ha-968000
	I0805 16:09:12.117532    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:09:12.117765    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:09:12.117781    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:09:12.126279    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51930
	I0805 16:09:12.126593    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:09:12.126941    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:09:12.126959    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:09:12.127183    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:09:12.127284    4013 main.go:141] libmachine: (ha-968000) Calling .GetState
	I0805 16:09:12.127369    4013 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:09:12.127424    4013 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid from json: 4025
	I0805 16:09:12.128374    4013 host.go:66] Checking if "ha-968000" exists ...
	I0805 16:09:12.128663    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:09:12.128678    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:09:12.137093    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51932
	I0805 16:09:12.137400    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:09:12.137721    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:09:12.137731    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:09:12.137942    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:09:12.138052    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:09:12.138149    4013 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000 for IP: 192.169.0.6
	I0805 16:09:12.138156    4013 certs.go:194] generating shared ca certs ...
	I0805 16:09:12.138169    4013 certs.go:226] acquiring lock for ca certs: {Name:mkb83e058d89c7d4e66f4136f377a3c305b13735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:09:12.138309    4013 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key
	I0805 16:09:12.138365    4013 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key
	I0805 16:09:12.138373    4013 certs.go:256] generating profile certs ...
	I0805 16:09:12.138477    4013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.key
	I0805 16:09:12.138565    4013 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key.77dc068d
	I0805 16:09:12.138631    4013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key
	I0805 16:09:12.138639    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 16:09:12.138660    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 16:09:12.138681    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 16:09:12.138700    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 16:09:12.138717    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 16:09:12.138735    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 16:09:12.138754    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 16:09:12.138776    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 16:09:12.138855    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem (1338 bytes)
	W0805 16:09:12.138895    4013 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0805 16:09:12.138904    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:09:12.138940    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem (1082 bytes)
	I0805 16:09:12.138974    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:09:12.139009    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem (1675 bytes)
	I0805 16:09:12.139074    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:09:12.139106    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:09:12.139125    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem -> /usr/share/ca-certificates/1678.pem
	I0805 16:09:12.139142    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /usr/share/ca-certificates/16782.pem
	I0805 16:09:12.139167    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:09:12.139259    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:09:12.139346    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:09:12.139430    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:09:12.139498    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:09:12.171916    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0805 16:09:12.175290    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0805 16:09:12.184095    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0805 16:09:12.187128    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0805 16:09:12.195868    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0805 16:09:12.198915    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0805 16:09:12.208072    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0805 16:09:12.211239    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0805 16:09:12.220236    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0805 16:09:12.223357    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0805 16:09:12.231812    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0805 16:09:12.234916    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0805 16:09:12.243760    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:09:12.264594    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 16:09:12.284204    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:09:12.304172    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:09:12.324282    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0805 16:09:12.344243    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 16:09:12.363682    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:09:12.383391    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 16:09:12.403042    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:09:12.422963    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0805 16:09:12.442422    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0805 16:09:12.462071    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0805 16:09:12.476035    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0805 16:09:12.489609    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0805 16:09:12.502965    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0805 16:09:12.516617    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0805 16:09:12.530178    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0805 16:09:12.543803    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
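The stat/scp pairs above exist because a joining control plane must reuse the primary's service-account keypair, front-proxy CA, and etcd CA; tokens signed by one apiserver would otherwise fail validation on another. A hedged spot-check that the copies landed (cert path from the log, SSH keys as used earlier in this run):

  ssh -i /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa \
    docker@192.169.0.5 'sudo sha256sum /var/lib/minikube/certs/sa.key'
  ssh -i /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/id_rsa \
    docker@192.169.0.6 'sudo sha256sum /var/lib/minikube/certs/sa.key'
  # the two hashes must match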
	I0805 16:09:12.557186    4013 ssh_runner.go:195] Run: openssl version
	I0805 16:09:12.561690    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0805 16:09:12.570469    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0805 16:09:12.573916    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:09:12.573968    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0805 16:09:12.578325    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0805 16:09:12.586655    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0805 16:09:12.595266    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0805 16:09:12.598773    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:09:12.598808    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0805 16:09:12.603106    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 16:09:12.611770    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:09:12.620276    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:09:12.623836    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:09:12.623874    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:09:12.628099    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
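The hash-named symlinks created above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's c_rehash convention: the link name is the certificate's subject hash plus a sequence number, which is how TLS clients look up CAs in /etc/ssl/certs. Reproduced by hand for the minikube CA:

  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # b5213941 here
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"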
	I0805 16:09:12.636558    4013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:09:12.640104    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 16:09:12.644367    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 16:09:12.648558    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 16:09:12.653002    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 16:09:12.657413    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 16:09:12.661571    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
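The -checkend 86400 runs above are expiry probes: openssl exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now, so any cert expiring within a day gets regenerated instead of reused. For example:

  openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
    && echo 'valid for >24h' || echo 'expires within 24h (or already expired)'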
	I0805 16:09:12.665817    4013 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.30.3 docker true true} ...
	I0805 16:09:12.665880    4013 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-968000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
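In the kubelet unit fragment above, the empty ExecStart= line is the standard systemd idiom for clearing the ExecStart inherited from the base unit before overriding it; the fragment ships as the 10-kubeadm.conf drop-in written a few lines below. As a standalone sketch of that step:

  sudo mkdir -p /etc/systemd/system/kubelet.service.d
  sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
  [Service]
  ExecStart=
  ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-968000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
  EOF
  sudo systemctl daemon-reload && sudo systemctl restart kubelet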
	I0805 16:09:12.665898    4013 kube-vip.go:115] generating kube-vip config ...
	I0805 16:09:12.665932    4013 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 16:09:12.678633    4013 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 16:09:12.678672    4013 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
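The static pod manifest above runs kube-vip on each control plane with leader election (the plndr-cp-lock lease), ARP-advertising the virtual IP 192.169.0.254 on eth0 and load-balancing apiserver traffic on port 8443 (lb_enable/lb_port). Once kubelet picks the manifest up from /etc/kubernetes/manifests, a hedged smoke test:

  # /version is readable anonymously under default RBAC, so the VIP answering here
  # means kube-vip holds the address and is forwarding to a live apiserver
  curl -sk https://192.169.0.254:8443/version
  # the current leader is recorded on the lease named by vip_leasename
  kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'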
	I0805 16:09:12.678725    4013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 16:09:12.686682    4013 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:09:12.686732    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0805 16:09:12.694235    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0805 16:09:12.708178    4013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:09:12.721592    4013 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0805 16:09:12.735241    4013 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0805 16:09:12.738251    4013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:09:12.747938    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:09:12.839333    4013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:09:12.855307    4013 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:09:12.855486    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:09:12.876653    4013 out.go:177] * Verifying Kubernetes components...
	I0805 16:09:12.918406    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:09:13.043139    4013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:09:13.061746    4013 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:09:13.061950    4013 kapi.go:59] client config for ha-968000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x85c5060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0805 16:09:13.061990    4013 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0805 16:09:13.062163    4013 node_ready.go:35] waiting up to 6m0s for node "ha-968000-m02" to be "Ready" ...
	I0805 16:09:13.062248    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:13.062253    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:13.062261    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:13.062265    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.259366    4013 round_trippers.go:574] Response Status: 200 OK in 8197 milliseconds
	I0805 16:09:21.260575    4013 node_ready.go:49] node "ha-968000-m02" has status "Ready":"True"
	I0805 16:09:21.260589    4013 node_ready.go:38] duration metric: took 8.198406493s for node "ha-968000-m02" to be "Ready" ...
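The poll above is the programmatic form of a Ready-condition check; the 8.2s first response is the apiserver coming back up behind the VIP. The same check by hand:

  kubectl get node ha-968000-m02 \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints True once Ready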
	I0805 16:09:21.260596    4013 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:09:21.260646    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:09:21.260653    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.260660    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.260665    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.302891    4013 round_trippers.go:574] Response Status: 200 OK in 42 milliseconds
	I0805 16:09:21.310518    4013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hjp5z" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.310596    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hjp5z
	I0805 16:09:21.310619    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.310632    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.310639    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.313152    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:21.313881    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:21.313892    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.313899    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.313902    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.317700    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:21.318187    4013 pod_ready.go:92] pod "coredns-7db6d8ff4d-hjp5z" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:21.318198    4013 pod_ready.go:81] duration metric: took 7.662792ms for pod "coredns-7db6d8ff4d-hjp5z" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.318207    4013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.318250    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:09:21.318256    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.318263    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.318268    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.326180    4013 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0805 16:09:21.326741    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:21.326750    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.326758    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.326763    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.331849    4013 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 16:09:21.332344    4013 pod_ready.go:92] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:21.332356    4013 pod_ready.go:81] duration metric: took 14.143254ms for pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.332364    4013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.332409    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000
	I0805 16:09:21.332416    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.332423    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.332426    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.335622    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:21.335995    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:21.336004    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.336019    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.336025    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.339965    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:21.340276    4013 pod_ready.go:92] pod "etcd-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:21.340287    4013 pod_ready.go:81] duration metric: took 7.918315ms for pod "etcd-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.340295    4013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.340346    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m02
	I0805 16:09:21.340352    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.340359    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.340365    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.342503    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:21.343015    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:21.343024    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.343031    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.343036    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.346019    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:21.346517    4013 pod_ready.go:92] pod "etcd-ha-968000-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:21.346530    4013 pod_ready.go:81] duration metric: took 6.229187ms for pod "etcd-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.346558    4013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.346618    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m03
	I0805 16:09:21.346625    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.346633    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.346638    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.351435    4013 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 16:09:21.461654    4013 request.go:629] Waited for 109.640417ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:21.461696    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:21.461703    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.461709    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.461715    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.465496    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:21.465774    4013 pod_ready.go:92] pod "etcd-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:21.465784    4013 pod_ready.go:81] duration metric: took 119.216409ms for pod "etcd-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
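The "Waited for ... due to client-side throttling" lines are not server-side priority and fairness; they come from client-go's token-bucket rate limiter. The rest.Config dumped earlier shows QPS:0, Burst:0, which client-go replaces with its defaults (5 requests/s, burst 10), so the alternating pod and node GETs queue for roughly 100-200ms each. kubectl uses the same limiter and, at sufficient verbosity, logs the identical notice (the verbosity threshold may vary by version):

  kubectl get pods -A -v=4 2>&1 | grep -i 'client-side throttling' || true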
	I0805 16:09:21.465817    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.661090    4013 request.go:629] Waited for 195.188408ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000
	I0805 16:09:21.661122    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000
	I0805 16:09:21.661127    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.661133    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.661136    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.663700    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:21.860705    4013 request.go:629] Waited for 196.382714ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:21.860744    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:21.860750    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.860758    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.860764    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.864103    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:21.864428    4013 pod_ready.go:92] pod "kube-apiserver-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:21.864438    4013 pod_ready.go:81] duration metric: took 398.612841ms for pod "kube-apiserver-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.864448    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:22.062331    4013 request.go:629] Waited for 197.82051ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m02
	I0805 16:09:22.062511    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m02
	I0805 16:09:22.062523    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:22.062533    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:22.062539    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:22.065766    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:22.262057    4013 request.go:629] Waited for 195.681075ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:22.262125    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:22.262130    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:22.262137    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:22.262140    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:22.264946    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:22.265310    4013 pod_ready.go:92] pod "kube-apiserver-ha-968000-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:22.265318    4013 pod_ready.go:81] duration metric: took 400.862554ms for pod "kube-apiserver-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:22.265325    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:22.460707    4013 request.go:629] Waited for 195.347101ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m03
	I0805 16:09:22.460759    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m03
	I0805 16:09:22.460765    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:22.460781    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:22.460785    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:22.464130    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:22.660697    4013 request.go:629] Waited for 196.193657ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:22.660729    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:22.660736    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:22.660779    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:22.660812    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:22.662931    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:22.663458    4013 pod_ready.go:92] pod "kube-apiserver-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:22.663468    4013 pod_ready.go:81] duration metric: took 398.13793ms for pod "kube-apiserver-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:22.663475    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:22.861064    4013 request.go:629] Waited for 197.549417ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000
	I0805 16:09:22.861116    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000
	I0805 16:09:22.861124    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:22.861131    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:22.861137    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:22.863357    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:23.060775    4013 request.go:629] Waited for 196.997441ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:23.060838    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:23.060844    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:23.060850    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:23.060854    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:23.062638    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:09:23.062947    4013 pod_ready.go:92] pod "kube-controller-manager-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:23.062956    4013 pod_ready.go:81] duration metric: took 399.47493ms for pod "kube-controller-manager-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:23.062963    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:23.262182    4013 request.go:629] Waited for 199.175443ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m02
	I0805 16:09:23.262278    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m02
	I0805 16:09:23.262289    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:23.262301    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:23.262309    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:23.265274    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:23.460721    4013 request.go:629] Waited for 194.890215ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:23.460750    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:23.460755    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:23.460761    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:23.460766    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:23.462860    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:23.463267    4013 pod_ready.go:97] node "ha-968000-m02" hosting pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m02" has status "Ready":"False"
	I0805 16:09:23.463277    4013 pod_ready.go:81] duration metric: took 400.308105ms for pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	E0805 16:09:23.463284    4013 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-968000-m02" hosting pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m02" has status "Ready":"False"
	I0805 16:09:23.463290    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:23.662538    4013 request.go:629] Waited for 199.207212ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:09:23.662619    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:09:23.662625    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:23.662631    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:23.662635    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:23.664768    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:23.861796    4013 request.go:629] Waited for 196.439694ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:23.861935    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:23.861946    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:23.861956    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:23.861962    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:23.865458    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:23.865815    4013 pod_ready.go:92] pod "kube-controller-manager-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:23.865826    4013 pod_ready.go:81] duration metric: took 402.529289ms for pod "kube-controller-manager-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:23.865833    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fvd5q" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:24.061409    4013 request.go:629] Waited for 195.531329ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvd5q
	I0805 16:09:24.061446    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvd5q
	I0805 16:09:24.061452    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:24.061491    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:24.061496    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:24.063747    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:24.261469    4013 request.go:629] Waited for 197.298268ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:24.261565    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:24.261573    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:24.261581    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:24.261587    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:24.264861    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:24.265277    4013 pod_ready.go:97] node "ha-968000-m02" hosting pod "kube-proxy-fvd5q" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m02" has status "Ready":"False"
	I0805 16:09:24.265288    4013 pod_ready.go:81] duration metric: took 399.450273ms for pod "kube-proxy-fvd5q" in "kube-system" namespace to be "Ready" ...
	E0805 16:09:24.265296    4013 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-968000-m02" hosting pod "kube-proxy-fvd5q" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m02" has status "Ready":"False"
	I0805 16:09:24.265301    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p4xgk" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:24.461481    4013 request.go:629] Waited for 196.027245ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p4xgk
	I0805 16:09:24.461559    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p4xgk
	I0805 16:09:24.461578    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:24.461590    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:24.461596    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:24.464886    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:24.661858    4013 request.go:629] Waited for 196.151825ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:24.662024    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:24.662034    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:24.662044    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:24.662050    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:24.665229    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:24.665765    4013 pod_ready.go:92] pod "kube-proxy-p4xgk" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:24.665774    4013 pod_ready.go:81] duration metric: took 400.467773ms for pod "kube-proxy-p4xgk" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:24.665781    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qptt6" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:24.861504    4013 request.go:629] Waited for 195.677553ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qptt6
	I0805 16:09:24.861566    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qptt6
	I0805 16:09:24.861577    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:24.861588    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:24.861595    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:24.865839    4013 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 16:09:25.061918    4013 request.go:629] Waited for 195.700422ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m04
	I0805 16:09:25.061988    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m04
	I0805 16:09:25.061994    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:25.062000    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:25.062004    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:25.063765    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:09:25.064046    4013 pod_ready.go:92] pod "kube-proxy-qptt6" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:25.064056    4013 pod_ready.go:81] duration metric: took 398.270559ms for pod "kube-proxy-qptt6" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:25.064065    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v87jb" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:25.261506    4013 request.go:629] Waited for 197.352793ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v87jb
	I0805 16:09:25.261554    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v87jb
	I0805 16:09:25.261563    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:25.261573    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:25.261582    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:25.264807    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:25.461565    4013 request.go:629] Waited for 196.17837ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:25.461605    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:25.461613    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:25.461621    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:25.461625    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:25.464575    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:25.464951    4013 pod_ready.go:92] pod "kube-proxy-v87jb" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:25.464960    4013 pod_ready.go:81] duration metric: took 400.887094ms for pod "kube-proxy-v87jb" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:25.464982    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:25.662277    4013 request.go:629] Waited for 197.19961ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000
	I0805 16:09:25.662316    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000
	I0805 16:09:25.662325    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:25.662333    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:25.662339    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:25.664596    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:25.861101    4013 request.go:629] Waited for 196.140125ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:25.861136    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:25.861142    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:25.861149    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:25.861155    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:25.863555    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:25.863937    4013 pod_ready.go:92] pod "kube-scheduler-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:25.863947    4013 pod_ready.go:81] duration metric: took 398.956028ms for pod "kube-scheduler-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:25.863960    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:26.061952    4013 request.go:629] Waited for 197.955177ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m02
	I0805 16:09:26.062048    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m02
	I0805 16:09:26.062057    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:26.062065    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:26.062070    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:26.064556    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:26.262140    4013 request.go:629] Waited for 197.126449ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:26.262175    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:26.262180    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:26.262186    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:26.262190    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:26.264203    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:26.264592    4013 pod_ready.go:97] node "ha-968000-m02" hosting pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m02" has status "Ready":"False"
	I0805 16:09:26.264603    4013 pod_ready.go:81] duration metric: took 400.638133ms for pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	E0805 16:09:26.264611    4013 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-968000-m02" hosting pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m02" has status "Ready":"False"
	I0805 16:09:26.264615    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:26.461402    4013 request.go:629] Waited for 196.72911ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m03
	I0805 16:09:26.461551    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m03
	I0805 16:09:26.461563    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:26.461573    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:26.461580    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:26.465124    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:26.661745    4013 request.go:629] Waited for 196.148221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:26.661836    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:26.661842    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:26.661848    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:26.661852    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:26.663931    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:26.664273    4013 pod_ready.go:92] pod "kube-scheduler-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:26.664282    4013 pod_ready.go:81] duration metric: took 399.661598ms for pod "kube-scheduler-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:26.664289    4013 pod_ready.go:38] duration metric: took 5.403682263s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
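
The block above is minikube's extra readiness pass: it fetches each system-critical pod and returns once its PodReady condition reports True, with the "Waited for ~195ms" lines coming from client-go's client-side rate limiter rather than the wait loop itself. For reference, a minimal client-go sketch of the same check; it assumes a kubeconfig at the default location, and the helper names and 400ms poll cadence are illustrative, not minikube's own.

// waitPodReady mirrors the pod_ready.go wait seen above: fetch the pod
// and return once its PodReady condition reports True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		// Illustrative cadence; in the log the ~200ms spacing is imposed
		// by client-go's throttler, not by an explicit sleep.
		time.Sleep(400 * time.Millisecond)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(context.Background(), cs, "kube-system", "kube-proxy-qptt6", 6*time.Minute))
}
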
	I0805 16:09:26.664305    4013 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:09:26.664365    4013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:09:26.676043    4013 api_server.go:72] duration metric: took 13.820707254s to wait for apiserver process to appear ...
	I0805 16:09:26.676055    4013 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:09:26.676075    4013 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0805 16:09:26.679244    4013 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0805 16:09:26.679280    4013 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0805 16:09:26.679287    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:26.679294    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:26.679298    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:26.679920    4013 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:09:26.680031    4013 api_server.go:141] control plane version: v1.30.3
	I0805 16:09:26.680044    4013 api_server.go:131] duration metric: took 3.983266ms to wait for apiserver health ...
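
The healthz probe logged above is a plain HTTPS GET that expects status 200 with body "ok". A minimal sketch of that probe follows; TLS verification is skipped here only for brevity, whereas the real client authenticates with the cluster's certificates.

// apiserverHealthy mirrors the /healthz check above: healthy means
// HTTP 200 and a body of exactly "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: skipping verification stands in for configuring
		// the cluster CA and client certs.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	fmt.Println(apiserverHealthy("https://192.169.0.5:8443/healthz"))
}
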
	I0805 16:09:26.680049    4013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 16:09:26.861214    4013 request.go:629] Waited for 181.081617ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:09:26.861259    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:09:26.861267    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:26.861278    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:26.861307    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:26.876137    4013 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0805 16:09:26.882111    4013 system_pods.go:59] 26 kube-system pods found
	I0805 16:09:26.882136    4013 system_pods.go:61] "coredns-7db6d8ff4d-hjp5z" [e31fd97b-2727-4db3-a17c-3302c320832b] Running
	I0805 16:09:26.882140    4013 system_pods.go:61] "coredns-7db6d8ff4d-mfzln" [ea5c136e-84a6-4253-8f61-85c427b83840] Running
	I0805 16:09:26.882143    4013 system_pods.go:61] "etcd-ha-968000" [24590478-199e-4d78-8312-3d5924d6e915] Running
	I0805 16:09:26.882146    4013 system_pods.go:61] "etcd-ha-968000-m02" [cefe6f5a-3a87-4ccf-9419-0b864275c9c9] Running
	I0805 16:09:26.882149    4013 system_pods.go:61] "etcd-ha-968000-m03" [ec752887-5a12-4888-ba88-3fb5d54c6ce7] Running
	I0805 16:09:26.882151    4013 system_pods.go:61] "kindnet-5dshm" [2641d2a9-a26a-4cbe-b8ea-99ed7c7af43c] Running
	I0805 16:09:26.882153    4013 system_pods.go:61] "kindnet-cglm9" [80a5d2ca-3d9f-4347-bb68-cd6eac4e4aa8] Running
	I0805 16:09:26.882156    4013 system_pods.go:61] "kindnet-fp5ns" [bf9c4454-9491-4a21-8f0a-6c6f21919551] Running
	I0805 16:09:26.882158    4013 system_pods.go:61] "kindnet-qh6l6" [382ac149-5a4e-4fe4-aaaa-9c929c93b101] Running
	I0805 16:09:26.882161    4013 system_pods.go:61] "kube-apiserver-ha-968000" [04e9a721-eb6e-47b4-a7f0-2cad1ee201f7] Running
	I0805 16:09:26.882164    4013 system_pods.go:61] "kube-apiserver-ha-968000-m02" [0465a825-6697-4a98-bb88-18df7929a5dd] Running
	I0805 16:09:26.882166    4013 system_pods.go:61] "kube-apiserver-ha-968000-m03" [a0d3fc83-9820-463e-81bb-2abcb1b4c868] Running
	I0805 16:09:26.882169    4013 system_pods.go:61] "kube-controller-manager-ha-968000" [2078d070-21b4-4d47-a4d3-b130fa8b3aaf] Running
	I0805 16:09:26.882171    4013 system_pods.go:61] "kube-controller-manager-ha-968000-m02" [f0a1cc06-05bb-4efa-9a53-ebccba2b5f9e] Running
	I0805 16:09:26.882174    4013 system_pods.go:61] "kube-controller-manager-ha-968000-m03" [d140abba-93f2-4062-8ee8-3918ff5ae882] Running
	I0805 16:09:26.882176    4013 system_pods.go:61] "kube-proxy-fvd5q" [f2f13535-5802-4a1c-8243-48de42b79e74] Running
	I0805 16:09:26.882179    4013 system_pods.go:61] "kube-proxy-p4xgk" [aaca6036-f95c-44fb-a358-5ac881148fa4] Running
	I0805 16:09:26.882182    4013 system_pods.go:61] "kube-proxy-qptt6" [a826a636-1d05-4cca-a56d-d25a9cf41506] Running
	I0805 16:09:26.882184    4013 system_pods.go:61] "kube-proxy-v87jb" [d98f61ac-3a61-452c-8507-7258a9703c15] Running
	I0805 16:09:26.882188    4013 system_pods.go:61] "kube-scheduler-ha-968000" [20bf4b5e-71a1-4708-bb6a-34b0e44f196d] Running
	I0805 16:09:26.882190    4013 system_pods.go:61] "kube-scheduler-ha-968000-m02" [e590d5bf-9517-433b-9759-5b0f16cfe9a9] Running
	I0805 16:09:26.882193    4013 system_pods.go:61] "kube-scheduler-ha-968000-m03" [91120005-f0b0-47d5-a91c-c06b12e6da3e] Running
	I0805 16:09:26.882197    4013 system_pods.go:61] "kube-vip-ha-968000" [373808d0-e9f2-4cea-a7b6-98b309fac6e7] Running
	I0805 16:09:26.882201    4013 system_pods.go:61] "kube-vip-ha-968000-m02" [713fc36a-5582-464c-82d3-02905c81b753] Running
	I0805 16:09:26.882204    4013 system_pods.go:61] "kube-vip-ha-968000-m03" [d94a7e1c-9ddd-4229-b4cd-ac05384dd20a] Running
	I0805 16:09:26.882207    4013 system_pods.go:61] "storage-provisioner" [52e2952a-756d-4f65-84f5-588cb6563297] Running
	I0805 16:09:26.882211    4013 system_pods.go:74] duration metric: took 202.157859ms to wait for pod list to return data ...
	I0805 16:09:26.882216    4013 default_sa.go:34] waiting for default service account to be created ...
	I0805 16:09:27.061417    4013 request.go:629] Waited for 179.110016ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:09:27.061534    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:09:27.061546    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:27.061557    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:27.061563    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:27.065177    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:27.065383    4013 default_sa.go:45] found service account: "default"
	I0805 16:09:27.065396    4013 default_sa.go:55] duration metric: took 183.174105ms for default service account to be created ...
	I0805 16:09:27.065406    4013 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 16:09:27.262565    4013 request.go:629] Waited for 197.034728ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:09:27.262625    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:09:27.262635    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:27.262646    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:27.262654    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:27.268433    4013 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 16:09:27.273328    4013 system_pods.go:86] 26 kube-system pods found
	I0805 16:09:27.273339    4013 system_pods.go:89] "coredns-7db6d8ff4d-hjp5z" [e31fd97b-2727-4db3-a17c-3302c320832b] Running
	I0805 16:09:27.273344    4013 system_pods.go:89] "coredns-7db6d8ff4d-mfzln" [ea5c136e-84a6-4253-8f61-85c427b83840] Running
	I0805 16:09:27.273348    4013 system_pods.go:89] "etcd-ha-968000" [24590478-199e-4d78-8312-3d5924d6e915] Running
	I0805 16:09:27.273351    4013 system_pods.go:89] "etcd-ha-968000-m02" [cefe6f5a-3a87-4ccf-9419-0b864275c9c9] Running
	I0805 16:09:27.273354    4013 system_pods.go:89] "etcd-ha-968000-m03" [ec752887-5a12-4888-ba88-3fb5d54c6ce7] Running
	I0805 16:09:27.273358    4013 system_pods.go:89] "kindnet-5dshm" [2641d2a9-a26a-4cbe-b8ea-99ed7c7af43c] Running
	I0805 16:09:27.273361    4013 system_pods.go:89] "kindnet-cglm9" [80a5d2ca-3d9f-4347-bb68-cd6eac4e4aa8] Running
	I0805 16:09:27.273365    4013 system_pods.go:89] "kindnet-fp5ns" [bf9c4454-9491-4a21-8f0a-6c6f21919551] Running
	I0805 16:09:27.273369    4013 system_pods.go:89] "kindnet-qh6l6" [382ac149-5a4e-4fe4-aaaa-9c929c93b101] Running
	I0805 16:09:27.273372    4013 system_pods.go:89] "kube-apiserver-ha-968000" [04e9a721-eb6e-47b4-a7f0-2cad1ee201f7] Running
	I0805 16:09:27.273376    4013 system_pods.go:89] "kube-apiserver-ha-968000-m02" [0465a825-6697-4a98-bb88-18df7929a5dd] Running
	I0805 16:09:27.273380    4013 system_pods.go:89] "kube-apiserver-ha-968000-m03" [a0d3fc83-9820-463e-81bb-2abcb1b4c868] Running
	I0805 16:09:27.273383    4013 system_pods.go:89] "kube-controller-manager-ha-968000" [2078d070-21b4-4d47-a4d3-b130fa8b3aaf] Running
	I0805 16:09:27.273387    4013 system_pods.go:89] "kube-controller-manager-ha-968000-m02" [f0a1cc06-05bb-4efa-9a53-ebccba2b5f9e] Running
	I0805 16:09:27.273393    4013 system_pods.go:89] "kube-controller-manager-ha-968000-m03" [d140abba-93f2-4062-8ee8-3918ff5ae882] Running
	I0805 16:09:27.273398    4013 system_pods.go:89] "kube-proxy-fvd5q" [f2f13535-5802-4a1c-8243-48de42b79e74] Running
	I0805 16:09:27.273401    4013 system_pods.go:89] "kube-proxy-p4xgk" [aaca6036-f95c-44fb-a358-5ac881148fa4] Running
	I0805 16:09:27.273408    4013 system_pods.go:89] "kube-proxy-qptt6" [a826a636-1d05-4cca-a56d-d25a9cf41506] Running
	I0805 16:09:27.273412    4013 system_pods.go:89] "kube-proxy-v87jb" [d98f61ac-3a61-452c-8507-7258a9703c15] Running
	I0805 16:09:27.273415    4013 system_pods.go:89] "kube-scheduler-ha-968000" [20bf4b5e-71a1-4708-bb6a-34b0e44f196d] Running
	I0805 16:09:27.273419    4013 system_pods.go:89] "kube-scheduler-ha-968000-m02" [e590d5bf-9517-433b-9759-5b0f16cfe9a9] Running
	I0805 16:09:27.273422    4013 system_pods.go:89] "kube-scheduler-ha-968000-m03" [91120005-f0b0-47d5-a91c-c06b12e6da3e] Running
	I0805 16:09:27.273426    4013 system_pods.go:89] "kube-vip-ha-968000" [373808d0-e9f2-4cea-a7b6-98b309fac6e7] Running
	I0805 16:09:27.273429    4013 system_pods.go:89] "kube-vip-ha-968000-m02" [713fc36a-5582-464c-82d3-02905c81b753] Running
	I0805 16:09:27.273433    4013 system_pods.go:89] "kube-vip-ha-968000-m03" [d94a7e1c-9ddd-4229-b4cd-ac05384dd20a] Running
	I0805 16:09:27.273450    4013 system_pods.go:89] "storage-provisioner" [52e2952a-756d-4f65-84f5-588cb6563297] Running
	I0805 16:09:27.273458    4013 system_pods.go:126] duration metric: took 208.046004ms to wait for k8s-apps to be running ...
	I0805 16:09:27.273468    4013 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 16:09:27.273520    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:09:27.285035    4013 system_svc.go:56] duration metric: took 11.567511ms WaitForService to wait for kubelet
	I0805 16:09:27.285048    4013 kubeadm.go:582] duration metric: took 14.42971445s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:09:27.285060    4013 node_conditions.go:102] verifying NodePressure condition ...
	I0805 16:09:27.461886    4013 request.go:629] Waited for 176.780844ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0805 16:09:27.461995    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0805 16:09:27.462013    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:27.462026    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:27.462035    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:27.465297    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:27.466219    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:09:27.466232    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:09:27.466242    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:09:27.466246    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:09:27.466249    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:09:27.466253    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:09:27.466256    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:09:27.466259    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:09:27.466262    4013 node_conditions.go:105] duration metric: took 181.199284ms to run NodePressure ...
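
The NodePressure step above lists all nodes and reads ephemeral-storage and CPU capacity from each node's status. A sketch of the same read, again assuming a reachable kubeconfig:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// The log prints exactly these two capacities for each node.
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n",
			n.Name, n.Status.Capacity.StorageEphemeral(), n.Status.Capacity.Cpu())
	}
}
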
	I0805 16:09:27.466271    4013 start.go:241] waiting for startup goroutines ...
	I0805 16:09:27.466288    4013 start.go:255] writing updated cluster config ...
	I0805 16:09:27.488716    4013 out.go:177] 
	I0805 16:09:27.508938    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:09:27.509085    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:09:27.531540    4013 out.go:177] * Starting "ha-968000-m03" control-plane node in "ha-968000" cluster
	I0805 16:09:27.573486    4013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:09:27.573507    4013 cache.go:56] Caching tarball of preloaded images
	I0805 16:09:27.573613    4013 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:09:27.573623    4013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:09:27.573701    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:09:27.574588    4013 start.go:360] acquireMachinesLock for ha-968000-m03: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:09:27.574644    4013 start.go:364] duration metric: took 42.919µs to acquireMachinesLock for "ha-968000-m03"
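
acquireMachinesLock serializes access to a machine definition, retrying on the 500ms delay and 13m timeout shown in the log entry above. A simple lock-file sketch of that acquire/release pattern; minikube itself uses a mutex library, so the lock-file mechanics and names here are illustrative only.

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock takes an exclusive lock by creating the file with O_EXCL,
// retrying every `delay` until `timeout` elapses.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s", path)
		}
		time.Sleep(delay) // matches the Delay:500ms in the log
	}
}

func main() {
	release, err := acquireLock("/tmp/ha-968000-m03.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; safe to start the machine")
}
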
	I0805 16:09:27.574659    4013 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:09:27.574662    4013 fix.go:54] fixHost starting: m03
	I0805 16:09:27.574910    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:09:27.574930    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:09:27.583789    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51937
	I0805 16:09:27.584141    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:09:27.584476    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:09:27.584490    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:09:27.584707    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:09:27.584816    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:09:27.584907    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetState
	I0805 16:09:27.584990    4013 main.go:141] libmachine: (ha-968000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:09:27.585071    4013 main.go:141] libmachine: (ha-968000-m03) DBG | hyperkit pid from json: 3471
	I0805 16:09:27.585977    4013 main.go:141] libmachine: (ha-968000-m03) DBG | hyperkit pid 3471 missing from process table
	I0805 16:09:27.585998    4013 fix.go:112] recreateIfNeeded on ha-968000-m03: state=Stopped err=<nil>
	I0805 16:09:27.586006    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	W0805 16:09:27.586083    4013 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:09:27.606653    4013 out.go:177] * Restarting existing hyperkit VM for "ha-968000-m03" ...
	I0805 16:09:27.648666    4013 main.go:141] libmachine: (ha-968000-m03) Calling .Start
	I0805 16:09:27.648869    4013 main.go:141] libmachine: (ha-968000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:09:27.648916    4013 main.go:141] libmachine: (ha-968000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/hyperkit.pid
	I0805 16:09:27.650524    4013 main.go:141] libmachine: (ha-968000-m03) DBG | hyperkit pid 3471 missing from process table
	I0805 16:09:27.650545    4013 main.go:141] libmachine: (ha-968000-m03) DBG | pid 3471 is in state "Stopped"
	I0805 16:09:27.650562    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/hyperkit.pid...
	I0805 16:09:27.650769    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Using UUID 2e5bd4cb-7666-4039-8bdc-5eded2ad114e
	I0805 16:09:27.679630    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Generated MAC 5e:e5:6c:f1:60:ca
	I0805 16:09:27.679657    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000
	I0805 16:09:27.679792    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2e5bd4cb-7666-4039-8bdc-5eded2ad114e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:09:27.679833    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2e5bd4cb-7666-4039-8bdc-5eded2ad114e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:09:27.679876    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2e5bd4cb-7666-4039-8bdc-5eded2ad114e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/ha-968000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"}
	I0805 16:09:27.679918    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2e5bd4cb-7666-4039-8bdc-5eded2ad114e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/ha-968000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"
	I0805 16:09:27.679930    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:09:27.681441    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 DEBUG: hyperkit: Pid is 4050
	I0805 16:09:27.681855    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Attempt 0
	I0805 16:09:27.681870    4013 main.go:141] libmachine: (ha-968000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:09:27.681942    4013 main.go:141] libmachine: (ha-968000-m03) DBG | hyperkit pid from json: 4050
	I0805 16:09:27.684086    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Searching for 5e:e5:6c:f1:60:ca in /var/db/dhcpd_leases ...
	I0805 16:09:27.684171    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0805 16:09:27.684192    4013 main.go:141] libmachine: (ha-968000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:09:27.684213    4013 main.go:141] libmachine: (ha-968000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2acfd}
	I0805 16:09:27.684223    4013 main.go:141] libmachine: (ha-968000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b15b5a}
	I0805 16:09:27.684257    4013 main.go:141] libmachine: (ha-968000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b2ac1c}
	I0805 16:09:27.684275    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Found match: 5e:e5:6c:f1:60:ca
	I0805 16:09:27.684281    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetConfigRaw
	I0805 16:09:27.684302    4013 main.go:141] libmachine: (ha-968000-m03) DBG | IP: 192.169.0.7
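
The driver resolves the VM's IP by scanning macOS's vmnet lease database for the MAC it just generated (5e:e5:6c:f1:60:ca above). A small parser sketch; the lease-entry field names (ip_address=, hw_address=1,<mac>) and their ordering are assumptions about the usual /var/db/dhcpd_leases layout, not something the log confirms.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans the lease file for an entry whose hw_address ends in
// the target MAC and returns the ip_address seen in that entry.
// Assumes ip_address= precedes hw_address= within each entry block.
func ipForMAC(leasePath, mac string) (string, error) {
	f, err := os.Open(leasePath)
	if err != nil {
		return "", err
	}
	defer f.Close()
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address=1,<mac>; the leading "1," is the hardware type.
			if strings.HasSuffix(line, ","+mac) {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("MAC %s not found in %s", mac, leasePath)
}

func main() {
	fmt.Println(ipForMAC("/var/db/dhcpd_leases", "5e:e5:6c:f1:60:ca"))
}
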
	I0805 16:09:27.684999    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetIP
	I0805 16:09:27.685240    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:09:27.685658    4013 machine.go:94] provisionDockerMachine start ...
	I0805 16:09:27.685674    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:09:27.685796    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:09:27.685888    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:09:27.685972    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:09:27.686054    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:09:27.686136    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:09:27.686243    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:27.686399    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:09:27.686406    4013 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:09:27.689026    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:09:27.697927    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:09:27.698811    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:09:27.698833    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:09:27.698857    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:09:27.698876    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:09:28.083003    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:09:28.083019    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:09:28.198118    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:09:28.198136    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:09:28.198156    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:09:28.198170    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:09:28.198987    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:09:28.198999    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:09:33.906297    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:33 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:09:33.906335    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:33 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:09:33.906345    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:33 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:09:33.929592    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:33 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:10:02.753110    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 16:10:02.753128    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetMachineName
	I0805 16:10:02.753270    4013 buildroot.go:166] provisioning hostname "ha-968000-m03"
	I0805 16:10:02.753282    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetMachineName
	I0805 16:10:02.753381    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:02.753472    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:02.753543    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:02.753631    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:02.753716    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:02.753836    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:02.753997    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:10:02.754006    4013 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-968000-m03 && echo "ha-968000-m03" | sudo tee /etc/hostname
	I0805 16:10:02.815926    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-968000-m03
	
	I0805 16:10:02.815941    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:02.816075    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:02.816178    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:02.816265    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:02.816353    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:02.816497    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:02.816655    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:10:02.816667    4013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-968000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-968000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-968000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:10:02.874015    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
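
Each provisioning step above is a shell snippet executed over SSH as the docker user with the machine's id_rsa key. A compact sketch of that transport using golang.org/x/crypto/ssh; host-key checking is disabled here only because these are throwaway test VMs, and the address, key path, and command are taken from the log.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runSSH dials the VM, runs one command, and returns its combined output.
func runSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runSSH("192.169.0.7:22", "docker",
		"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/id_rsa",
		`sudo hostname ha-968000-m03 && echo "ha-968000-m03" | sudo tee /etc/hostname`)
	fmt.Println(out, err)
}
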
	I0805 16:10:02.874031    4013 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:10:02.874040    4013 buildroot.go:174] setting up certificates
	I0805 16:10:02.874046    4013 provision.go:84] configureAuth start
	I0805 16:10:02.874053    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetMachineName
	I0805 16:10:02.874189    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetIP
	I0805 16:10:02.874289    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:02.874374    4013 provision.go:143] copyHostCerts
	I0805 16:10:02.874402    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:10:02.874450    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:10:02.874455    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:10:02.874582    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:10:02.874781    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:10:02.874825    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:10:02.874830    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:10:02.874901    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:10:02.875047    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:10:02.875075    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:10:02.875079    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:10:02.875146    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:10:02.875295    4013 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.ha-968000-m03 san=[127.0.0.1 192.169.0.7 ha-968000-m03 localhost minikube]
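
The server certificate above is issued by the shared minikube CA with SANs covering the loopback address, the VM IP, and the host names in the san=[...] list. A condensed crypto/x509 sketch of issuing such a cert; a freshly generated CA stands in for loading the real ca.pem/ca-key.pem, and key sizes and validity periods are illustrative.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; the real flow loads ca.pem and ca-key.pem from disk.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-968000-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log:
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
		DNSNames:    []string{"ha-968000-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, srv, ca, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
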
	I0805 16:10:03.100424    4013 provision.go:177] copyRemoteCerts
	I0805 16:10:03.100475    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:10:03.100489    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:03.100628    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:03.100734    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.100820    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:03.100908    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/id_rsa Username:docker}
	I0805 16:10:03.133644    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:10:03.133711    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:10:03.152881    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:10:03.152956    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 16:10:03.172153    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:10:03.172226    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 16:10:03.192347    4013 provision.go:87] duration metric: took 318.292468ms to configureAuth
	I0805 16:10:03.192362    4013 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:10:03.192542    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:10:03.192555    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:03.192694    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:03.192785    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:03.192880    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.192966    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.193041    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:03.193164    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:03.193316    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:10:03.193325    4013 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:10:03.244032    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:10:03.244045    4013 buildroot.go:70] root file system type: tmpfs
	I0805 16:10:03.244123    4013 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:10:03.244135    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:03.244259    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:03.244342    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.244429    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.244514    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:03.244643    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:03.244779    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:10:03.244826    4013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:10:03.306704    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:10:03.306723    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:03.306859    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:03.306950    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.307037    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.307124    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:03.307256    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:03.307400    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:10:03.307414    4013 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:10:04.932560    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:10:04.932575    4013 machine.go:97] duration metric: took 37.246896971s to provisionDockerMachine
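
The unit update just above follows a write-if-changed idiom: write docker.service.new, diff it against the live unit, and only on a difference swap the file in and daemon-reload/enable/restart. The same idempotent update expressed locally in Go as a sketch; the paths are illustrative, and the real run executes this remotely over SSH with sudo.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit replaces the unit file and restarts docker only when the
// contents actually differ, mirroring the diff-or-swap one-liner above.
func updateUnit(path string, newContents []byte) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContents) {
		return nil // unchanged: skip the restart entirely
	}
	if err := os.WriteFile(path+".new", newContents, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	fmt.Println(updateUnit("/lib/systemd/system/docker.service", unit))
}
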
	I0805 16:10:04.932584    4013 start.go:293] postStartSetup for "ha-968000-m03" (driver="hyperkit")
	I0805 16:10:04.932592    4013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:10:04.932606    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:04.932806    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:10:04.932820    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:04.932921    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:04.933017    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:04.933114    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:04.933199    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/id_rsa Username:docker}
	I0805 16:10:04.965742    4013 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:10:04.968779    4013 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:10:04.968789    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:10:04.968872    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:10:04.969009    4013 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:10:04.969015    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:10:04.969171    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:10:04.977326    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:10:04.996442    4013 start.go:296] duration metric: took 63.849242ms for postStartSetup
	I0805 16:10:04.996464    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:04.996645    4013 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0805 16:10:04.996658    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:04.996749    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:04.996835    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:04.996919    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:04.996988    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/id_rsa Username:docker}
	I0805 16:10:05.029923    4013 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0805 16:10:05.029990    4013 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0805 16:10:05.062439    4013 fix.go:56] duration metric: took 37.48776057s for fixHost
	I0805 16:10:05.062463    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:05.062605    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:05.062687    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:05.062782    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:05.062875    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:05.062995    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:05.063135    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:10:05.063142    4013 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:10:05.114144    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722899405.020487015
	
	I0805 16:10:05.114159    4013 fix.go:216] guest clock: 1722899405.020487015
	I0805 16:10:05.114164    4013 fix.go:229] Guest: 2024-08-05 16:10:05.020487015 -0700 PDT Remote: 2024-08-05 16:10:05.062453 -0700 PDT m=+89.419854401 (delta=-41.965985ms)
	I0805 16:10:05.114175    4013 fix.go:200] guest clock delta is within tolerance: -41.965985ms
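
The guest-clock check above runs "date +%s.%N" in the VM and compares the result with host time, resyncing only when the delta exceeds a tolerance. A sketch of that comparison; the one-second tolerance is an assumption, since the log only shows that -41.965985ms passed, and float parsing loses a little of the nanosecond precision.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns
// guest time minus host time.
func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Values taken from the log's Guest/Remote timestamps.
	delta, err := clockDelta("1722899405.020487015", time.Unix(1722899405, 62453000))
	if err != nil {
		panic(err)
	}
	tolerance := time.Second // assumed threshold; not shown in the log
	fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta.Abs() < tolerance)
}
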
	I0805 16:10:05.114179    4013 start.go:83] releasing machines lock for "ha-968000-m03", held for 37.53951612s
	I0805 16:10:05.114196    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:05.114320    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetIP
	I0805 16:10:05.154856    4013 out.go:177] * Found network options:
	I0805 16:10:05.196438    4013 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0805 16:10:05.217521    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 16:10:05.217542    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:10:05.217557    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:05.218022    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:05.218155    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:05.218244    4013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:10:05.218267    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	W0805 16:10:05.218289    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 16:10:05.218305    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:10:05.218380    4013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:10:05.218396    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:05.218397    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:05.218547    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:05.218562    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:05.218682    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:05.218701    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:05.218796    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/id_rsa Username:docker}
	I0805 16:10:05.218817    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:05.218922    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/id_rsa Username:docker}
	W0805 16:10:05.247739    4013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:10:05.247807    4013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:10:05.295633    4013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:10:05.295651    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:10:05.295736    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:10:05.311187    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:10:05.320167    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:10:05.328956    4013 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:10:05.329006    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:10:05.337987    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:10:05.346989    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:10:05.356292    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:10:05.365468    4013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:10:05.374794    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:10:05.383659    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:10:05.392613    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
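
The sed commands above rewrite /etc/containerd/config.toml in place: force the cgroupfs driver (SystemdCgroup = false), pin the pause image, switch legacy runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. A minimal Go rendition of the SystemdCgroup rewrite, equivalent to the indentation-preserving sed capture group; the sample TOML is illustrative:

-- sketch (go) --
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
-- /sketch --
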
	I0805 16:10:05.401497    4013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:10:05.409761    4013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:10:05.417735    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:10:05.522068    4013 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:10:05.541086    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:10:05.541154    4013 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:10:05.560931    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:10:05.572370    4013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:10:05.590083    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:10:05.601381    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:10:05.612999    4013 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:10:05.640303    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:10:05.651924    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:10:05.666834    4013 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:10:05.669785    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:10:05.677888    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:10:05.691535    4013 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:10:05.794601    4013 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:10:05.896489    4013 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:10:05.896516    4013 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
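
"configuring docker to use cgroupfs" scps a small /etc/docker/daemon.json to the node; the log records only its size (130 bytes). A sketch of a plausible shape for that file — every key below is an assumption based on typical minikube-provisioned daemons, not read from this log:

-- sketch (go) --
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed contents; only the cgroupfs exec-opt is implied by the log line above.
	daemon := map[string]interface{}{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	out, _ := json.MarshalIndent(daemon, "", "  ")
	fmt.Println(string(out))
}
-- /sketch --
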
	I0805 16:10:05.916844    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:10:06.013180    4013 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:10:08.281931    4013 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.2687312s)
	I0805 16:10:08.281998    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:10:08.292879    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:10:08.303134    4013 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:10:08.403828    4013 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:10:08.520343    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:10:08.633419    4013 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:10:08.648137    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:10:08.659447    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:10:08.754463    4013 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:10:08.821178    4013 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:10:08.821256    4013 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:10:08.825268    4013 start.go:563] Will wait 60s for crictl version
	I0805 16:10:08.825311    4013 ssh_runner.go:195] Run: which crictl
	I0805 16:10:08.828380    4013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:10:08.856405    4013 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0805 16:10:08.856477    4013 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:10:08.873070    4013 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:10:08.917245    4013 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 16:10:08.958050    4013 out.go:177]   - env NO_PROXY=192.169.0.5
	I0805 16:10:08.978959    4013 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0805 16:10:08.999958    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetIP
	I0805 16:10:09.000163    4013 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0805 16:10:09.003143    4013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
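
The one-liner above refreshes /etc/hosts: it drops any stale tab-separated host.minikube.internal entry, appends the current mapping, and installs the result via a temp file and sudo cp. A small Go rendition of the filter-and-append step (the temp-file copy is omitted):

-- sketch (go) --
package main

import (
	"fmt"
	"strings"
)

// upsertHost removes any line ending in "\t<name>" and appends "ip\tname",
// mirroring the grep -v / echo pipeline in the log.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n")
}

func main() {
	fmt.Println(upsertHost("127.0.0.1\tlocalhost", "192.169.0.1", "host.minikube.internal"))
}
-- /sketch --
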
	I0805 16:10:09.012521    4013 mustload.go:65] Loading cluster: ha-968000
	I0805 16:10:09.012700    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:10:09.012919    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:10:09.012941    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:10:09.021950    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51959
	I0805 16:10:09.022290    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:10:09.022650    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:10:09.022672    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:10:09.022912    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:10:09.023042    4013 main.go:141] libmachine: (ha-968000) Calling .GetState
	I0805 16:10:09.023120    4013 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:10:09.023210    4013 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid from json: 4025
	I0805 16:10:09.024146    4013 host.go:66] Checking if "ha-968000" exists ...
	I0805 16:10:09.024412    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:10:09.024436    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:10:09.033094    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51961
	I0805 16:10:09.033420    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:10:09.033772    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:10:09.033792    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:10:09.034017    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:10:09.034135    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:10:09.034227    4013 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000 for IP: 192.169.0.7
	I0805 16:10:09.034233    4013 certs.go:194] generating shared ca certs ...
	I0805 16:10:09.034246    4013 certs.go:226] acquiring lock for ca certs: {Name:mkb83e058d89c7d4e66f4136f377a3c305b13735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:10:09.034388    4013 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key
	I0805 16:10:09.034442    4013 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key
	I0805 16:10:09.034452    4013 certs.go:256] generating profile certs ...
	I0805 16:10:09.034546    4013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.key
	I0805 16:10:09.034648    4013 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key.526236ea
	I0805 16:10:09.034697    4013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key
	I0805 16:10:09.034704    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 16:10:09.034725    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 16:10:09.034745    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 16:10:09.034764    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 16:10:09.034786    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 16:10:09.034809    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 16:10:09.034828    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 16:10:09.034845    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 16:10:09.034929    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem (1338 bytes)
	W0805 16:10:09.034968    4013 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0805 16:10:09.034982    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:10:09.035017    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem (1082 bytes)
	I0805 16:10:09.035050    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:10:09.035079    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem (1675 bytes)
	I0805 16:10:09.035147    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:10:09.035187    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:10:09.035213    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem -> /usr/share/ca-certificates/1678.pem
	I0805 16:10:09.035232    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /usr/share/ca-certificates/16782.pem
	I0805 16:10:09.035261    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:10:09.035348    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:10:09.035432    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:10:09.035523    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:10:09.035597    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:10:09.068818    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0805 16:10:09.072729    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0805 16:10:09.083911    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0805 16:10:09.087068    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0805 16:10:09.096135    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0805 16:10:09.099562    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0805 16:10:09.109334    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0805 16:10:09.112743    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0805 16:10:09.122244    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0805 16:10:09.125580    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0805 16:10:09.134471    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0805 16:10:09.137936    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0805 16:10:09.147798    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:10:09.168268    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 16:10:09.188512    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:10:09.208613    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:10:09.229102    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0805 16:10:09.248927    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 16:10:09.269438    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:10:09.289326    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 16:10:09.309414    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:10:09.329327    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0805 16:10:09.349275    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0805 16:10:09.369465    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0805 16:10:09.383270    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0805 16:10:09.397217    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0805 16:10:09.410973    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0805 16:10:09.424636    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0805 16:10:09.438657    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0805 16:10:09.453241    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0805 16:10:09.467220    4013 ssh_runner.go:195] Run: openssl version
	I0805 16:10:09.471496    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:10:09.479975    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:10:09.483494    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:10:09.483535    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:10:09.487639    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 16:10:09.496028    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0805 16:10:09.504248    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0805 16:10:09.507546    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:10:09.507582    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0805 16:10:09.511833    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0805 16:10:09.520110    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0805 16:10:09.528467    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0805 16:10:09.531788    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:10:09.531831    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0805 16:10:09.536023    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 16:10:09.544245    4013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:10:09.547794    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 16:10:09.552109    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 16:10:09.556303    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 16:10:09.560442    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 16:10:09.564725    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 16:10:09.569207    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
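
Each `openssl x509 ... -checkend 86400` run above verifies that the certificate does not expire within the next 24 hours; a nonzero exit would force regeneration. The same check in Go with crypto/x509 — the file path in main is illustrative; on the node the checks target /var/lib/minikube/certs/*.crt as logged:

-- sketch (go) --
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at pemPath expires
// within the given window, i.e. the condition under which -checkend fails.
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err) // true would mean the cert needs regeneration
}
-- /sketch --
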
	I0805 16:10:09.573628    4013 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.30.3 docker true true} ...
	I0805 16:10:09.573688    4013 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-968000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 16:10:09.573706    4013 kube-vip.go:115] generating kube-vip config ...
	I0805 16:10:09.573746    4013 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 16:10:09.586333    4013 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
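
The modprobe above loads the IPVS modules that kube-vip's control-plane load-balancing depends on. A quick presence check against /proc/modules — Linux-only, meant to run on the node rather than the darwin host, and the trailing-space match is a simplification:

-- sketch (go) --
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/proc/modules")
	if err != nil {
		fmt.Println("cannot read /proc/modules:", err)
		return
	}
	for _, mod := range []string{"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack"} {
		loaded := strings.Contains(string(data), mod+" ")
		fmt.Printf("%s loaded: %v\n", mod, loaded)
	}
}
-- /sketch --
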
	I0805 16:10:09.586392    4013 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
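
The leader-election env vars in the manifest above give a 5s lease, 3s renew deadline, and 1s retry period, which satisfy the ordering leader election requires (leaseDuration > renewDeadline > retryPeriod). A trivial sanity check of those timings:

-- sketch (go) --
package main

import "fmt"

func main() {
	// Values taken from vip_leaseduration, vip_renewdeadline, vip_retryperiod above (seconds).
	lease, renew, retry := 5, 3, 1
	fmt.Println("timings valid:", lease > renew && renew > retry) // timings valid: true
}
-- /sketch --
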
	I0805 16:10:09.586454    4013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 16:10:09.595015    4013 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:10:09.595072    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0805 16:10:09.604755    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0805 16:10:09.618293    4013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:10:09.632089    4013 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0805 16:10:09.645814    4013 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0805 16:10:09.648794    4013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:10:09.658221    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:10:09.755214    4013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:10:09.770035    4013 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:10:09.770231    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:10:09.791589    4013 out.go:177] * Verifying Kubernetes components...
	I0805 16:10:09.812147    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:10:09.922409    4013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:10:09.937680    4013 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:10:09.937905    4013 kapi.go:59] client config for ha-968000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x85c5060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0805 16:10:09.937943    4013 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0805 16:10:09.938123    4013 node_ready.go:35] waiting up to 6m0s for node "ha-968000-m03" to be "Ready" ...
	I0805 16:10:09.938166    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:09.938171    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:09.938177    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:09.938184    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:09.940537    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:09.940846    4013 node_ready.go:49] node "ha-968000-m03" has status "Ready":"True"
	I0805 16:10:09.940856    4013 node_ready.go:38] duration metric: took 2.724361ms for node "ha-968000-m03" to be "Ready" ...
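
The round_trippers traffic that follows is the readiness poll: GET the node (and then each system pod), read its Ready condition, and retry roughly twice a second until the 6m deadline. Note the stale-host override above, which redirects requests from the VIP 192.169.0.254 to the first control plane at 192.169.0.5. A client-go sketch of the node half of that loop, assuming client-go is available; minikube's own loop lives in node_ready.go and is not reproduced here:

-- sketch (go) --
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the node and reports whether its Ready condition is True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19373-1122/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // matches "waiting up to 6m0s"
	for time.Now().Before(deadline) {
		if ok, _ := nodeReady(cs, "ha-968000-m03"); ok {
			fmt.Println("node Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls roughly twice a second
	}
	fmt.Println("timed out")
}
-- /sketch --
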
	I0805 16:10:09.940863    4013 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:10:09.940900    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:10:09.940905    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:09.940911    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:09.940915    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:09.945944    4013 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 16:10:09.953862    4013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hjp5z" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:09.953919    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hjp5z
	I0805 16:10:09.953924    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:09.953930    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:09.953934    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:09.956348    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:09.956979    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:09.956988    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:09.956994    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:09.956998    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:09.959221    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:09.959622    4013 pod_ready.go:92] pod "coredns-7db6d8ff4d-hjp5z" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:09.959632    4013 pod_ready.go:81] duration metric: took 5.75325ms for pod "coredns-7db6d8ff4d-hjp5z" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:09.959646    4013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:09.959683    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:09.959688    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:09.959693    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:09.959697    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:09.961820    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:09.962245    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:09.962252    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:09.962258    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:09.962262    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:09.964245    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:10.460326    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:10.460341    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:10.460347    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:10.460351    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:10.462931    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:10.463525    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:10.463534    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:10.463540    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:10.463545    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:10.465741    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:10.960459    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:10.960479    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:10.960487    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:10.960490    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:10.964999    4013 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 16:10:10.965521    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:10.965531    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:10.965538    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:10.965541    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:10.968401    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:11.459862    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:11.459879    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:11.459888    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:11.459896    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:11.462705    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:11.463338    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:11.463348    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:11.463355    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:11.463359    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:11.465847    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:11.960724    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:11.960741    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:11.960748    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:11.960751    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:11.963442    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:11.963893    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:11.963902    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:11.963909    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:11.963915    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:11.966015    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:11.966351    4013 pod_ready.go:102] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"False"
	I0805 16:10:12.460750    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:12.460767    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:12.460775    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:12.460780    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:12.463726    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:12.464380    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:12.464390    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:12.464397    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:12.464403    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:12.466771    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:12.959777    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:12.959794    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:12.959800    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:12.959803    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:12.963016    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:12.963521    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:12.963530    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:12.963537    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:12.963541    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:12.965964    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:13.461027    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:13.461044    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:13.461052    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:13.461056    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:13.463804    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:13.464772    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:13.464781    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:13.464789    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:13.464792    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:13.467029    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:13.961022    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:13.961082    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:13.961090    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:13.961093    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:13.963530    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:13.964018    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:13.964026    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:13.964037    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:13.964040    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:13.966396    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:13.966704    4013 pod_ready.go:102] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"False"
	I0805 16:10:14.460972    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:14.461029    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:14.461037    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:14.461040    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:14.463269    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:14.463827    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:14.463834    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:14.463840    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:14.463844    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:14.465651    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:14.960796    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:14.960810    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:14.960817    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:14.960821    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:14.963503    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:14.964069    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:14.964076    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:14.964082    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:14.964085    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:14.965973    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:15.460976    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:15.461042    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:15.461054    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:15.461062    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:15.464639    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:15.465242    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:15.465250    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:15.465255    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:15.465259    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:15.467095    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:15.960558    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:15.960569    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:15.960575    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:15.960579    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:15.962733    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:15.963261    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:15.963268    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:15.963274    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:15.963278    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:15.964836    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:16.460120    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:16.460142    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:16.460150    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:16.460154    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:16.462634    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:16.463246    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:16.463254    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:16.463260    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:16.463264    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:16.464841    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:16.465283    4013 pod_ready.go:102] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"False"
	I0805 16:10:16.959766    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:16.959781    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:16.959789    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:16.959792    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:16.962161    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:16.962538    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:16.962546    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:16.962551    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:16.962554    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:16.964199    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:17.459940    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:17.460028    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:17.460043    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:17.460058    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:17.463177    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:17.463929    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:17.463939    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:17.463947    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:17.463954    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:17.465814    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:17.960492    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:17.960517    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:17.960529    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:17.960535    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:17.963854    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:17.964340    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:17.964348    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:17.964354    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:17.964359    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:17.965846    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:18.459859    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:18.459922    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:18.459934    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:18.459943    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:18.463097    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:18.463745    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:18.463756    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:18.463764    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:18.463769    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:18.466108    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:18.466647    4013 pod_ready.go:102] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"False"
	I0805 16:10:18.961260    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:18.961336    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:18.961346    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:18.961351    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:18.964473    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:18.964862    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:18.964870    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:18.964876    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:18.964879    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:18.966810    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:19.461327    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:19.461342    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:19.461349    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:19.461352    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:19.463586    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:19.464052    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:19.464061    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:19.464067    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:19.464071    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:19.465827    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:19.959893    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:19.959916    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:19.959928    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:19.959936    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:19.963708    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:19.964323    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:19.964330    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:19.964337    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:19.964341    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:19.966276    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:20.460973    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:20.460999    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:20.461012    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:20.461019    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:20.464211    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:20.464772    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:20.464780    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:20.464786    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:20.464790    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:20.466297    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:20.466755    4013 pod_ready.go:102] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"False"
	I0805 16:10:20.960914    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:20.960928    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:20.960937    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:20.960940    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:20.963464    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:20.963838    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:20.963846    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:20.963851    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:20.963855    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:20.965570    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:21.461564    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:21.461601    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:21.461612    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:21.461617    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:21.464031    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:21.464425    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:21.464433    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:21.464439    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:21.464442    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:21.466022    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:21.960219    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:21.960247    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:21.960261    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:21.960271    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:21.963797    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:21.964415    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:21.964422    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:21.964428    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:21.964431    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:21.966018    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.460781    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:22.460829    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.460837    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.460841    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.463024    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:22.463683    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:22.463691    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.463697    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.463701    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.465467    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.960911    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:22.960935    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.960982    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.960999    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.964197    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:22.964786    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:22.964793    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.964799    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.964802    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.966466    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.966844    4013 pod_ready.go:92] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:22.966853    4013 pod_ready.go:81] duration metric: took 13.007198003s for pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace to be "Ready" ...
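The lines above are minikube's pod_ready poll loop: roughly every 500ms it GETs the pod and then the node hosting it, until the pod's Ready condition reports True (13s in total for this coredns pod). Below is a minimal sketch of the same pattern with client-go; the helper name and the use of wait.PollUntilContextTimeout are assumptions for illustration, not minikube's actual implementation.

-- example: readiness poll (Go) --
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls every 500ms until the pod's Ready condition is True,
// mirroring the GET-pod/GET-node cadence visible in the log above.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitPodReady(context.Background(), cs, "kube-system", "coredns-7db6d8ff4d-mfzln", 6*time.Minute)
	fmt.Println("Ready wait finished:", err)
}
-- /example --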
	I0805 16:10:22.966869    4013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:22.966901    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000
	I0805 16:10:22.966906    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.966912    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.966916    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.968437    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.968826    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:22.968833    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.968839    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.968842    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.970427    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.970912    4013 pod_ready.go:92] pod "etcd-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:22.970922    4013 pod_ready.go:81] duration metric: took 4.046965ms for pod "etcd-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:22.970928    4013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:22.970963    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m02
	I0805 16:10:22.970968    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.970973    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.970978    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.972820    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.973377    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:22.973385    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.973391    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.973395    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.975041    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.975357    4013 pod_ready.go:92] pod "etcd-ha-968000-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:22.975366    4013 pod_ready.go:81] duration metric: took 4.433286ms for pod "etcd-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:22.975373    4013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:22.975410    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m03
	I0805 16:10:22.975415    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.975421    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.975428    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.977033    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.977409    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:22.977416    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.977422    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.977425    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.978990    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:23.477076    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m03
	I0805 16:10:23.477102    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:23.477114    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:23.477120    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:23.480444    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:23.480920    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:23.480927    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:23.480934    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:23.480937    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:23.482684    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:23.976407    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m03
	I0805 16:10:23.976432    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:23.976443    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:23.976450    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:23.979450    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:23.979998    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:23.980005    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:23.980011    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:23.980015    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:23.981679    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:24.476784    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m03
	I0805 16:10:24.476798    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.476805    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.476814    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.479014    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:24.479514    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:24.479522    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.479528    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.479531    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.481269    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:24.481711    4013 pod_ready.go:92] pod "etcd-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:24.481720    4013 pod_ready.go:81] duration metric: took 1.506341693s for pod "etcd-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:24.481735    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:24.481776    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000
	I0805 16:10:24.481781    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.481787    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.481791    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.483526    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:24.483895    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:24.483903    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.483909    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.483913    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.485324    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:24.485707    4013 pod_ready.go:92] pod "kube-apiserver-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:24.485716    4013 pod_ready.go:81] duration metric: took 3.976033ms for pod "kube-apiserver-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:24.485725    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:24.485755    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m02
	I0805 16:10:24.485761    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.485766    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.485771    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.487225    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:24.561028    4013 request.go:629] Waited for 73.447214ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:24.561115    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:24.561127    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.561139    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.561146    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.564386    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:24.564772    4013 pod_ready.go:92] pod "kube-apiserver-ha-968000-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:24.564785    4013 pod_ready.go:81] duration metric: took 79.054588ms for pod "kube-apiserver-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:24.564795    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:24.761641    4013 request.go:629] Waited for 196.793833ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m03
	I0805 16:10:24.761722    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m03
	I0805 16:10:24.761728    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.761734    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.761738    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.763753    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:24.961783    4013 request.go:629] Waited for 197.554669ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:24.961853    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:24.961860    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.961868    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.961872    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.964254    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:24.964712    4013 pod_ready.go:92] pod "kube-apiserver-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:24.964722    4013 pod_ready.go:81] duration metric: took 399.920246ms for pod "kube-apiserver-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
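The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go itself when its per-client token-bucket limiter (default QPS=5, Burst=10) delays a request; they are unrelated to server-side API Priority and Fairness. A sketch of where those limits live, with illustrative (not minikube's) values:

-- example: client-side rate limits (Go) --
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10; bursts of GETs beyond that are delayed,
	// which client-go logs exactly as seen above. Values here are illustrative.
	cfg.QPS = 50
	cfg.Burst = 100
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Same call as the GET /version at the end of this wait sequence.
	ver, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", ver.GitVersion)
}
-- /example --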
	I0805 16:10:24.964728    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:25.161961    4013 request.go:629] Waited for 197.196834ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000
	I0805 16:10:25.162018    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000
	I0805 16:10:25.162024    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:25.162028    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:25.162032    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:25.164098    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:25.362062    4013 request.go:629] Waited for 197.590252ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:25.362143    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:25.362150    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:25.362158    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:25.362164    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:25.364469    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:25.364982    4013 pod_ready.go:92] pod "kube-controller-manager-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:25.364995    4013 pod_ready.go:81] duration metric: took 400.260627ms for pod "kube-controller-manager-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:25.365004    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:25.561095    4013 request.go:629] Waited for 196.05214ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m02
	I0805 16:10:25.561139    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m02
	I0805 16:10:25.561147    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:25.561173    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:25.561180    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:25.563313    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:25.761969    4013 request.go:629] Waited for 198.293569ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:25.762009    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:25.762016    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:25.762027    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:25.762062    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:25.764659    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:25.765098    4013 pod_ready.go:92] pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:25.765107    4013 pod_ready.go:81] duration metric: took 400.096353ms for pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:25.765120    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:25.961382    4013 request.go:629] Waited for 196.226504ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:10:25.961416    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:10:25.961422    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:25.961434    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:25.961446    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:25.963534    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:26.162364    4013 request.go:629] Waited for 198.280605ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:26.162397    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:26.162402    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:26.162408    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:26.162412    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:26.164357    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:26.362197    4013 request.go:629] Waited for 94.915828ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:10:26.362260    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:10:26.362266    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:26.362273    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:26.362276    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:26.364350    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:26.562545    4013 request.go:629] Waited for 197.745091ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:26.562624    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:26.562630    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:26.562637    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:26.562640    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:26.565319    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:26.767236    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:10:26.767251    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:26.767257    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:26.767262    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:26.769341    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:26.962089    4013 request.go:629] Waited for 192.24367ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:26.962162    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:26.962168    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:26.962175    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:26.962178    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:26.964212    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:27.267240    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:10:27.267258    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:27.267266    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:27.267270    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:27.269879    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:27.362824    4013 request.go:629] Waited for 92.466824ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:27.362855    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:27.362861    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:27.362867    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:27.362873    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:27.364886    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:27.365316    4013 pod_ready.go:92] pod "kube-controller-manager-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:27.365326    4013 pod_ready.go:81] duration metric: took 1.600199608s for pod "kube-controller-manager-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:27.365333    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fvd5q" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:27.562545    4013 request.go:629] Waited for 197.173723ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvd5q
	I0805 16:10:27.562641    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvd5q
	I0805 16:10:27.562650    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:27.562667    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:27.562672    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:27.564919    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:27.762505    4013 request.go:629] Waited for 197.212423ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:27.762538    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:27.762543    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:27.762549    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:27.762554    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:27.764932    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:27.765395    4013 pod_ready.go:92] pod "kube-proxy-fvd5q" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:27.765405    4013 pod_ready.go:81] duration metric: took 400.066585ms for pod "kube-proxy-fvd5q" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:27.765413    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p4xgk" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:27.962081    4013 request.go:629] Waited for 196.624809ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p4xgk
	I0805 16:10:27.962208    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p4xgk
	I0805 16:10:27.962219    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:27.962231    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:27.962265    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:27.965643    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:28.161558    4013 request.go:629] Waited for 195.152397ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:28.161641    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:28.161650    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:28.161658    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:28.161662    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:28.164062    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:28.164477    4013 pod_ready.go:92] pod "kube-proxy-p4xgk" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:28.164486    4013 pod_ready.go:81] duration metric: took 399.068204ms for pod "kube-proxy-p4xgk" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:28.164494    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qptt6" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:28.362129    4013 request.go:629] Waited for 197.598336ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qptt6
	I0805 16:10:28.362162    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qptt6
	I0805 16:10:28.362167    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:28.362173    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:28.362177    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:28.364194    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:28.561667    4013 request.go:629] Waited for 196.999586ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m04
	I0805 16:10:28.561700    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m04
	I0805 16:10:28.561748    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:28.561756    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:28.561759    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:28.564274    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:28.564561    4013 pod_ready.go:97] node "ha-968000-m04" hosting pod "kube-proxy-qptt6" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m04" has status "Ready":"Unknown"
	I0805 16:10:28.564573    4013 pod_ready.go:81] duration metric: took 400.073458ms for pod "kube-proxy-qptt6" in "kube-system" namespace to be "Ready" ...
	E0805 16:10:28.564580    4013 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-968000-m04" hosting pod "kube-proxy-qptt6" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m04" has status "Ready":"Unknown"
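Here the poller short-circuits: kube-proxy-qptt6 is skipped because its node ha-968000-m04 reports Ready "Unknown" (its hyperkit VM is down; the log restarts it further below). A sketch of that node-condition check; the helper name is hypothetical.

-- example: node Ready check (Go) --
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// Status can be True, False, or Unknown; the log above shows
			// "Ready":"Unknown" for ha-968000-m04 after its VM stopped.
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := nodeIsReady(context.Background(), cs, "ha-968000-m04")
	fmt.Println("node Ready:", ready, "err:", err)
}
-- /example --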
	I0805 16:10:28.564585    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v87jb" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:28.761155    4013 request.go:629] Waited for 196.536425ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v87jb
	I0805 16:10:28.761194    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v87jb
	I0805 16:10:28.761220    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:28.761235    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:28.761241    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:28.763501    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:28.962341    4013 request.go:629] Waited for 198.29849ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:28.962395    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:28.962429    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:28.962455    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:28.962470    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:28.965239    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:28.965595    4013 pod_ready.go:92] pod "kube-proxy-v87jb" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:28.965603    4013 pod_ready.go:81] duration metric: took 401.013479ms for pod "kube-proxy-v87jb" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:28.965611    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:29.161737    4013 request.go:629] Waited for 196.060247ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000
	I0805 16:10:29.161876    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000
	I0805 16:10:29.161889    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:29.161901    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:29.161907    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:29.165617    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:29.361022    4013 request.go:629] Waited for 194.748045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:29.361106    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:29.361115    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:29.361123    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:29.361133    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:29.363092    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:29.363445    4013 pod_ready.go:92] pod "kube-scheduler-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:29.363455    4013 pod_ready.go:81] duration metric: took 397.839229ms for pod "kube-scheduler-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:29.363462    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:29.562518    4013 request.go:629] Waited for 199.009741ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m02
	I0805 16:10:29.562602    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m02
	I0805 16:10:29.562608    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:29.562616    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:29.562621    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:29.565612    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:29.761127    4013 request.go:629] Waited for 195.236074ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:29.761159    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:29.761163    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:29.761169    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:29.761174    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:29.763545    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:29.764045    4013 pod_ready.go:92] pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:29.764056    4013 pod_ready.go:81] duration metric: took 400.588926ms for pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:29.764063    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:29.961261    4013 request.go:629] Waited for 197.156425ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m03
	I0805 16:10:29.961356    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m03
	I0805 16:10:29.961365    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:29.961373    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:29.961379    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:29.963937    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:30.162354    4013 request.go:629] Waited for 197.925421ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:30.162411    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:30.162422    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:30.162485    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:30.162494    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:30.165503    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:30.166291    4013 pod_ready.go:92] pod "kube-scheduler-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:30.166300    4013 pod_ready.go:81] duration metric: took 402.232052ms for pod "kube-scheduler-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:30.166308    4013 pod_ready.go:38] duration metric: took 20.225431391s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:10:30.166322    4013 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:10:30.166373    4013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:10:30.178781    4013 api_server.go:72] duration metric: took 20.408716061s to wait for apiserver process to appear ...
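Before probing HTTP health, minikube confirms an apiserver process exists by running pgrep inside the guest (via its ssh_runner); exit status 0 means a match. A local os/exec illustration of the same command, with the pattern string taken from the log:

-- example: pgrep process check (Go) --
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -f matches against the full command line, -x requires an exact match,
	// -n picks the newest matching process. Exit status 0 means a match.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("no kube-apiserver process found:", err)
		return
	}
	fmt.Printf("kube-apiserver pid: %s", out)
}
-- /example --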
	I0805 16:10:30.178794    4013 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:10:30.178806    4013 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0805 16:10:30.181777    4013 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
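The healthz probe above is a plain GET against the apiserver's /healthz endpoint, expecting a 200 response with body "ok". A self-contained sketch; InsecureSkipVerify is for illustration only, since minikube actually trusts its cluster CA and presents client certs.

-- example: healthz probe (Go) --
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: skip cert verification instead of loading the CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.169.0.5:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // expect "200: ok"
}
-- /example --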
	I0805 16:10:30.181817    4013 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0805 16:10:30.181822    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:30.181828    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:30.181832    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:30.182461    4013 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:10:30.182514    4013 api_server.go:141] control plane version: v1.30.3
	I0805 16:10:30.182522    4013 api_server.go:131] duration metric: took 3.723541ms to wait for apiserver health ...
	I0805 16:10:30.182527    4013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 16:10:30.361346    4013 request.go:629] Waited for 178.775767ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:10:30.361395    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:10:30.361407    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:30.361483    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:30.361495    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:30.367528    4013 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 16:10:30.373218    4013 system_pods.go:59] 26 kube-system pods found
	I0805 16:10:30.373231    4013 system_pods.go:61] "coredns-7db6d8ff4d-hjp5z" [e31fd97b-2727-4db3-a17c-3302c320832b] Running
	I0805 16:10:30.373242    4013 system_pods.go:61] "coredns-7db6d8ff4d-mfzln" [ea5c136e-84a6-4253-8f61-85c427b83840] Running
	I0805 16:10:30.373246    4013 system_pods.go:61] "etcd-ha-968000" [24590478-199e-4d78-8312-3d5924d6e915] Running
	I0805 16:10:30.373249    4013 system_pods.go:61] "etcd-ha-968000-m02" [cefe6f5a-3a87-4ccf-9419-0b864275c9c9] Running
	I0805 16:10:30.373253    4013 system_pods.go:61] "etcd-ha-968000-m03" [ec752887-5a12-4888-ba88-3fb5d54c6ce7] Running
	I0805 16:10:30.373255    4013 system_pods.go:61] "kindnet-5dshm" [2641d2a9-a26a-4cbe-b8ea-99ed7c7af43c] Running
	I0805 16:10:30.373258    4013 system_pods.go:61] "kindnet-cglm9" [80a5d2ca-3d9f-4347-bb68-cd6eac4e4aa8] Running
	I0805 16:10:30.373261    4013 system_pods.go:61] "kindnet-fp5ns" [bf9c4454-9491-4a21-8f0a-6c6f21919551] Running
	I0805 16:10:30.373267    4013 system_pods.go:61] "kindnet-qh6l6" [382ac149-5a4e-4fe4-aaaa-9c929c93b101] Running
	I0805 16:10:30.373270    4013 system_pods.go:61] "kube-apiserver-ha-968000" [04e9a721-eb6e-47b4-a7f0-2cad1ee201f7] Running
	I0805 16:10:30.373272    4013 system_pods.go:61] "kube-apiserver-ha-968000-m02" [0465a825-6697-4a98-bb88-18df7929a5dd] Running
	I0805 16:10:30.373275    4013 system_pods.go:61] "kube-apiserver-ha-968000-m03" [a0d3fc83-9820-463e-81bb-2abcb1b4c868] Running
	I0805 16:10:30.373278    4013 system_pods.go:61] "kube-controller-manager-ha-968000" [2078d070-21b4-4d47-a4d3-b130fa8b3aaf] Running
	I0805 16:10:30.373280    4013 system_pods.go:61] "kube-controller-manager-ha-968000-m02" [f0a1cc06-05bb-4efa-9a53-ebccba2b5f9e] Running
	I0805 16:10:30.373283    4013 system_pods.go:61] "kube-controller-manager-ha-968000-m03" [d140abba-93f2-4062-8ee8-3918ff5ae882] Running
	I0805 16:10:30.373286    4013 system_pods.go:61] "kube-proxy-fvd5q" [f2f13535-5802-4a1c-8243-48de42b79e74] Running
	I0805 16:10:30.373290    4013 system_pods.go:61] "kube-proxy-p4xgk" [aaca6036-f95c-44fb-a358-5ac881148fa4] Running
	I0805 16:10:30.373293    4013 system_pods.go:61] "kube-proxy-qptt6" [a826a636-1d05-4cca-a56d-d25a9cf41506] Running
	I0805 16:10:30.373296    4013 system_pods.go:61] "kube-proxy-v87jb" [d98f61ac-3a61-452c-8507-7258a9703c15] Running
	I0805 16:10:30.373298    4013 system_pods.go:61] "kube-scheduler-ha-968000" [20bf4b5e-71a1-4708-bb6a-34b0e44f196d] Running
	I0805 16:10:30.373301    4013 system_pods.go:61] "kube-scheduler-ha-968000-m02" [e590d5bf-9517-433b-9759-5b0f16cfe9a9] Running
	I0805 16:10:30.373303    4013 system_pods.go:61] "kube-scheduler-ha-968000-m03" [91120005-f0b0-47d5-a91c-c06b12e6da3e] Running
	I0805 16:10:30.373306    4013 system_pods.go:61] "kube-vip-ha-968000" [ac1aab33-b1d7-4b08-bde4-1bbd87c671f6] Running
	I0805 16:10:30.373308    4013 system_pods.go:61] "kube-vip-ha-968000-m02" [713fc36a-5582-464c-82d3-02905c81b753] Running
	I0805 16:10:30.373311    4013 system_pods.go:61] "kube-vip-ha-968000-m03" [d94a7e1c-9ddd-4229-b4cd-ac05384dd20a] Running
	I0805 16:10:30.373315    4013 system_pods.go:61] "storage-provisioner" [52e2952a-756d-4f65-84f5-588cb6563297] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 16:10:30.373320    4013 system_pods.go:74] duration metric: took 190.788685ms to wait for pod list to return data ...
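The 26-pod readout above comes from a single list of the kube-system namespace; note that storage-provisioner is phase Running yet not Ready, which is why minikube prints the extra ContainersNotReady detail. A sketch producing a similar summary:

-- example: kube-system pod summary (Go) --
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		var notReady []string
		for _, st := range p.Status.ContainerStatuses {
			if !st.Ready {
				notReady = append(notReady, st.Name)
			}
		}
		if len(notReady) > 0 {
			// e.g. storage-provisioner above: phase Running, container not Ready
			fmt.Printf("%q %s (containers with unready status: %v)\n", p.Name, p.Status.Phase, notReady)
		} else {
			fmt.Printf("%q %s\n", p.Name, p.Status.Phase)
		}
	}
}
-- /example --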
	I0805 16:10:30.373327    4013 default_sa.go:34] waiting for default service account to be created ...
	I0805 16:10:30.561033    4013 request.go:629] Waited for 187.657545ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:10:30.561084    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:10:30.561123    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:30.561138    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:30.561146    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:30.564680    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:30.564786    4013 default_sa.go:45] found service account: "default"
	I0805 16:10:30.564796    4013 default_sa.go:55] duration metric: took 191.464074ms for default service account to be created ...
	I0805 16:10:30.564801    4013 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 16:10:30.761949    4013 request.go:629] Waited for 197.098715ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:10:30.762013    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:10:30.762021    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:30.762029    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:30.762035    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:30.768776    4013 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 16:10:30.774173    4013 system_pods.go:86] 26 kube-system pods found
	I0805 16:10:30.774191    4013 system_pods.go:89] "coredns-7db6d8ff4d-hjp5z" [e31fd97b-2727-4db3-a17c-3302c320832b] Running
	I0805 16:10:30.774196    4013 system_pods.go:89] "coredns-7db6d8ff4d-mfzln" [ea5c136e-84a6-4253-8f61-85c427b83840] Running
	I0805 16:10:30.774200    4013 system_pods.go:89] "etcd-ha-968000" [24590478-199e-4d78-8312-3d5924d6e915] Running
	I0805 16:10:30.774203    4013 system_pods.go:89] "etcd-ha-968000-m02" [cefe6f5a-3a87-4ccf-9419-0b864275c9c9] Running
	I0805 16:10:30.774207    4013 system_pods.go:89] "etcd-ha-968000-m03" [ec752887-5a12-4888-ba88-3fb5d54c6ce7] Running
	I0805 16:10:30.774211    4013 system_pods.go:89] "kindnet-5dshm" [2641d2a9-a26a-4cbe-b8ea-99ed7c7af43c] Running
	I0805 16:10:30.774214    4013 system_pods.go:89] "kindnet-cglm9" [80a5d2ca-3d9f-4347-bb68-cd6eac4e4aa8] Running
	I0805 16:10:30.774219    4013 system_pods.go:89] "kindnet-fp5ns" [bf9c4454-9491-4a21-8f0a-6c6f21919551] Running
	I0805 16:10:30.774222    4013 system_pods.go:89] "kindnet-qh6l6" [382ac149-5a4e-4fe4-aaaa-9c929c93b101] Running
	I0805 16:10:30.774225    4013 system_pods.go:89] "kube-apiserver-ha-968000" [04e9a721-eb6e-47b4-a7f0-2cad1ee201f7] Running
	I0805 16:10:30.774229    4013 system_pods.go:89] "kube-apiserver-ha-968000-m02" [0465a825-6697-4a98-bb88-18df7929a5dd] Running
	I0805 16:10:30.774232    4013 system_pods.go:89] "kube-apiserver-ha-968000-m03" [a0d3fc83-9820-463e-81bb-2abcb1b4c868] Running
	I0805 16:10:30.774236    4013 system_pods.go:89] "kube-controller-manager-ha-968000" [2078d070-21b4-4d47-a4d3-b130fa8b3aaf] Running
	I0805 16:10:30.774240    4013 system_pods.go:89] "kube-controller-manager-ha-968000-m02" [f0a1cc06-05bb-4efa-9a53-ebccba2b5f9e] Running
	I0805 16:10:30.774243    4013 system_pods.go:89] "kube-controller-manager-ha-968000-m03" [d140abba-93f2-4062-8ee8-3918ff5ae882] Running
	I0805 16:10:30.774246    4013 system_pods.go:89] "kube-proxy-fvd5q" [f2f13535-5802-4a1c-8243-48de42b79e74] Running
	I0805 16:10:30.774250    4013 system_pods.go:89] "kube-proxy-p4xgk" [aaca6036-f95c-44fb-a358-5ac881148fa4] Running
	I0805 16:10:30.774253    4013 system_pods.go:89] "kube-proxy-qptt6" [a826a636-1d05-4cca-a56d-d25a9cf41506] Running
	I0805 16:10:30.774257    4013 system_pods.go:89] "kube-proxy-v87jb" [d98f61ac-3a61-452c-8507-7258a9703c15] Running
	I0805 16:10:30.774261    4013 system_pods.go:89] "kube-scheduler-ha-968000" [20bf4b5e-71a1-4708-bb6a-34b0e44f196d] Running
	I0805 16:10:30.774265    4013 system_pods.go:89] "kube-scheduler-ha-968000-m02" [e590d5bf-9517-433b-9759-5b0f16cfe9a9] Running
	I0805 16:10:30.774268    4013 system_pods.go:89] "kube-scheduler-ha-968000-m03" [91120005-f0b0-47d5-a91c-c06b12e6da3e] Running
	I0805 16:10:30.774271    4013 system_pods.go:89] "kube-vip-ha-968000" [ac1aab33-b1d7-4b08-bde4-1bbd87c671f6] Running
	I0805 16:10:30.774275    4013 system_pods.go:89] "kube-vip-ha-968000-m02" [713fc36a-5582-464c-82d3-02905c81b753] Running
	I0805 16:10:30.774281    4013 system_pods.go:89] "kube-vip-ha-968000-m03" [d94a7e1c-9ddd-4229-b4cd-ac05384dd20a] Running
	I0805 16:10:30.774287    4013 system_pods.go:89] "storage-provisioner" [52e2952a-756d-4f65-84f5-588cb6563297] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 16:10:30.774292    4013 system_pods.go:126] duration metric: took 209.48655ms to wait for k8s-apps to be running ...
	I0805 16:10:30.774299    4013 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 16:10:30.774355    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:10:30.784922    4013 system_svc.go:56] duration metric: took 10.617828ms WaitForService to wait for kubelet
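The kubelet check above relies purely on systemctl's exit status: with --quiet there is no output, and status 0 means the unit is active. A local illustration mirroring the logged invocation (the real command runs inside the guest over SSH):

-- example: kubelet service check (Go) --
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as the log above; the exit status alone says whether
	// the kubelet unit is active.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
-- /example --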
	I0805 16:10:30.784940    4013 kubeadm.go:582] duration metric: took 21.014875463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:10:30.784959    4013 node_conditions.go:102] verifying NodePressure condition ...
	I0805 16:10:30.960928    4013 request.go:629] Waited for 175.930639ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0805 16:10:30.960954    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0805 16:10:30.960958    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:30.960965    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:30.960969    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:30.963520    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:30.964254    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:10:30.964263    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:10:30.964270    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:10:30.964274    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:10:30.964278    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:10:30.964281    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:10:30.964284    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:10:30.964287    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:10:30.964290    4013 node_conditions.go:105] duration metric: took 179.327419ms to run NodePressure ...
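The four capacity pairs above (one per node) come from each node's status.capacity. A sketch that lists the nodes and prints the same two resources:

-- example: node capacity readout (Go) --
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		// Matches the "ephemeral capacity is 17734596Ki" / "cpu capacity is 2" lines.
		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
-- /example --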
	I0805 16:10:30.964299    4013 start.go:241] waiting for startup goroutines ...
	I0805 16:10:30.964314    4013 start.go:255] writing updated cluster config ...
	I0805 16:10:30.985934    4013 out.go:177] 
	I0805 16:10:31.006970    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:10:31.007089    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:10:31.028647    4013 out.go:177] * Starting "ha-968000-m04" worker node in "ha-968000" cluster
	I0805 16:10:31.070449    4013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:10:31.070470    4013 cache.go:56] Caching tarball of preloaded images
	I0805 16:10:31.070587    4013 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:10:31.070597    4013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:10:31.070661    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:10:31.071212    4013 start.go:360] acquireMachinesLock for ha-968000-m04: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:10:31.071274    4013 start.go:364] duration metric: took 48.958µs to acquireMachinesLock for "ha-968000-m04"
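The acquireMachinesLock struct printed above (Name/Clock/Delay/Timeout/Cancel) matches a juju/mutex Spec, a cross-process named lock that serializes machine operations between minikube processes. A sketch under that assumption; the exact import path and usage are inferred, not confirmed by this log:

-- example: machines lock (Go) --
package main

import (
	"fmt"
	"time"

	"github.com/juju/clock"
	"github.com/juju/mutex/v2"
)

func main() {
	releaser, err := mutex.Acquire(mutex.Spec{
		Name:    "mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6", // lock name from the log
		Clock:   clock.WallClock,
		Delay:   500 * time.Millisecond, // retry interval while contended, per the log
		Timeout: 13 * time.Minute,       // give up after this long, per the log
	})
	if err != nil {
		panic(err)
	}
	defer releaser.Release()
	fmt.Println("machines lock held")
}
-- /example --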
	I0805 16:10:31.071288    4013 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:10:31.071292    4013 fix.go:54] fixHost starting: m04
	I0805 16:10:31.071532    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:10:31.071551    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:10:31.080682    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51965
	I0805 16:10:31.081033    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:10:31.081390    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:10:31.081404    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:10:31.081602    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:10:31.081699    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:10:31.081797    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetState
	I0805 16:10:31.081874    4013 main.go:141] libmachine: (ha-968000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:10:31.081960    4013 main.go:141] libmachine: (ha-968000-m04) DBG | hyperkit pid from json: 3587
	I0805 16:10:31.082940    4013 main.go:141] libmachine: (ha-968000-m04) DBG | hyperkit pid 3587 missing from process table
	I0805 16:10:31.082969    4013 fix.go:112] recreateIfNeeded on ha-968000-m04: state=Stopped err=<nil>
	I0805 16:10:31.082980    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	W0805 16:10:31.083071    4013 fix.go:138] unexpected machine state, will restart: <nil>
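The "hyperkit pid 3587 missing from process table" check above is how the driver concludes state=Stopped: the pid from its hyperkit.pid file no longer exists, so the VM must be restarted. On Unix the standard way to test pid liveness is signal 0, sketched here:

-- example: pid liveness check (Go) --
package main

import (
	"fmt"
	"os"
	"syscall"
)

func pidAlive(pid int) bool {
	proc, err := os.FindProcess(pid) // on Unix this always succeeds
	if err != nil {
		return false
	}
	// Signal 0 delivers nothing; it only performs the existence/permission check.
	return proc.Signal(syscall.Signal(0)) == nil
}

func main() {
	fmt.Println(pidAlive(3587)) // pid as read from the driver's hyperkit.pid file
}
-- /example --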
	I0805 16:10:31.103629    4013 out.go:177] * Restarting existing hyperkit VM for "ha-968000-m04" ...
	I0805 16:10:31.144437    4013 main.go:141] libmachine: (ha-968000-m04) Calling .Start
	I0805 16:10:31.144560    4013 main.go:141] libmachine: (ha-968000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:10:31.144576    4013 main.go:141] libmachine: (ha-968000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/hyperkit.pid
	I0805 16:10:31.144624    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Using UUID a18c3228-c5cd-4311-88be-5c31f452a5bc
	I0805 16:10:31.170211    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Generated MAC 2e:80:64:4a:6a:1a
	I0805 16:10:31.170234    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000
	I0805 16:10:31.170385    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a18c3228-c5cd-4311-88be-5c31f452a5bc", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ad770)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:10:31.170420    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a18c3228-c5cd-4311-88be-5c31f452a5bc", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ad770)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:10:31.170473    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a18c3228-c5cd-4311-88be-5c31f452a5bc", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/ha-968000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"}
	I0805 16:10:31.170506    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a18c3228-c5cd-4311-88be-5c31f452a5bc -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/ha-968000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"
	I0805 16:10:31.170534    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:10:31.171899    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 DEBUG: hyperkit: Pid is 4076
	I0805 16:10:31.172381    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Attempt 0
	I0805 16:10:31.172398    4013 main.go:141] libmachine: (ha-968000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:10:31.172450    4013 main.go:141] libmachine: (ha-968000-m04) DBG | hyperkit pid from json: 4076
	I0805 16:10:31.173609    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Searching for 2e:80:64:4a:6a:1a in /var/db/dhcpd_leases ...
	I0805 16:10:31.173677    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0805 16:10:31.173696    4013 main.go:141] libmachine: (ha-968000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b2ad30}
	I0805 16:10:31.173728    4013 main.go:141] libmachine: (ha-968000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:10:31.173759    4013 main.go:141] libmachine: (ha-968000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2acfd}
	I0805 16:10:31.173793    4013 main.go:141] libmachine: (ha-968000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b15b5a}
	I0805 16:10:31.173811    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Found match: 2e:80:64:4a:6a:1a
	I0805 16:10:31.173825    4013 main.go:141] libmachine: (ha-968000-m04) DBG | IP: 192.169.0.8
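To recover the node's IP, the driver scans macOS's /var/db/dhcpd_leases for the MAC it generated for the VM. A small Go sketch of that lookup, assuming the usual vmnet lease block format reflected in the entries above (ipForMAC is a hypothetical helper):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans a dhcpd_leases file for a lease block whose hw_address
// matches mac and returns that block's ip_address. Blocks hold one
// key=value pair per line between "{" and "}", e.g.
//   name=minikube
//   ip_address=192.169.0.8
//   hw_address=1,2e:80:64:4a:6a:1a
func ipForMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip, hw string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{":
			ip, hw = "", "" // start of a new lease block
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address=1,2e:80:64:4a:6a:1a — the "1," is the hardware type.
			// The file stores octets without leading zeros (b2:64:5d:40:b:b5
			// above), so a robust matcher would normalize both sides first.
			hw = strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:]
			}
		case line == "}":
			if strings.EqualFold(hw, mac) && ip != "" {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "2e:80:64:4a:6a:1a")
	fmt.Println(ip, err)
}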
	I0805 16:10:31.173829    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetConfigRaw
	I0805 16:10:31.174658    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetIP
	I0805 16:10:31.174867    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:10:31.175539    4013 machine.go:94] provisionDockerMachine start ...
	I0805 16:10:31.175554    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:10:31.175674    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:10:31.175766    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:10:31.175918    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:10:31.176065    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:10:31.176193    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:10:31.176341    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:31.176494    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:10:31.176502    4013 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:10:31.179979    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:10:31.189022    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:10:31.190141    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:10:31.190167    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:10:31.190183    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:10:31.190196    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:10:31.578293    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:10:31.578309    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:10:31.693368    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:10:31.693393    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:10:31.693424    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:10:31.693448    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:10:31.694196    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:10:31.694209    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:10:37.416235    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:10:37.416360    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:10:37.416373    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:10:37.440251    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:11:06.247173    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
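The `hostname` round trip above is an ordinary SSH exec against the node. A self-contained sketch of the same call using golang.org/x/crypto/ssh, reusing the key path, user, and address reported in this log (InsecureIgnoreHostKey is tolerable only because the target is a throwaway local VM):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and address as reported in the log above.
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local VM only; never do this for real hosts
	}
	client, err := ssh.Dial("tcp", "192.169.0.8:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("hostname")
	fmt.Printf("%s err=%v\n", out, err) // "minikube" before the rename, as in the log
}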
	I0805 16:11:06.247187    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetMachineName
	I0805 16:11:06.247309    4013 buildroot.go:166] provisioning hostname "ha-968000-m04"
	I0805 16:11:06.247318    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetMachineName
	I0805 16:11:06.247423    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.247508    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:06.247594    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.247671    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.247772    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:06.247899    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:11:06.248060    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:11:06.248068    4013 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-968000-m04 && echo "ha-968000-m04" | sudo tee /etc/hostname
	I0805 16:11:06.317371    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-968000-m04
	
	I0805 16:11:06.317388    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.317526    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:06.317622    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.317715    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.317808    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:06.317937    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:11:06.318101    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:11:06.318113    4013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-968000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-968000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-968000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:11:06.382855    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:11:06.382871    4013 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:11:06.382888    4013 buildroot.go:174] setting up certificates
	I0805 16:11:06.382895    4013 provision.go:84] configureAuth start
	I0805 16:11:06.382903    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetMachineName
	I0805 16:11:06.383053    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetIP
	I0805 16:11:06.383164    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.383233    4013 provision.go:143] copyHostCerts
	I0805 16:11:06.383260    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:11:06.383324    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:11:06.383330    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:11:06.383467    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:11:06.383688    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:11:06.383735    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:11:06.383741    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:11:06.383821    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:11:06.383965    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:11:06.384005    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:11:06.384009    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:11:06.384091    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:11:06.384243    4013 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.ha-968000-m04 san=[127.0.0.1 192.169.0.8 ha-968000-m04 localhost minikube]
	I0805 16:11:06.441247    4013 provision.go:177] copyRemoteCerts
	I0805 16:11:06.441333    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:11:06.441360    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.441582    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:06.441714    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.441797    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:06.441875    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/id_rsa Username:docker}
	I0805 16:11:06.478976    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:11:06.479045    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:11:06.498620    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:11:06.498698    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 16:11:06.519415    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:11:06.519486    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:11:06.539397    4013 provision.go:87] duration metric: took 156.493754ms to configureAuth
	I0805 16:11:06.539413    4013 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:11:06.539605    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:11:06.539618    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:06.539752    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.539832    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:06.539911    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.540002    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.540090    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:06.540207    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:11:06.540372    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:11:06.540380    4013 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:11:06.599043    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:11:06.599055    4013 buildroot.go:70] root file system type: tmpfs
	I0805 16:11:06.599124    4013 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:11:06.599137    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.599263    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:06.599347    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.599450    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.599542    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:06.599675    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:11:06.599808    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:11:06.599855    4013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:11:06.668751    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:11:06.668771    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.668901    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:06.669001    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.669105    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.669186    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:06.669346    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:11:06.669490    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:11:06.669502    4013 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:11:08.250301    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:11:08.250316    4013 machine.go:97] duration metric: took 37.074755145s to provisionDockerMachine
	I0805 16:11:08.250324    4013 start.go:293] postStartSetup for "ha-968000-m04" (driver="hyperkit")
	I0805 16:11:08.250332    4013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:11:08.250344    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:08.250520    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:11:08.250533    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:08.250626    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:08.250720    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:08.250813    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:08.250900    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/id_rsa Username:docker}
	I0805 16:11:08.286575    4013 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:11:08.289665    4013 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:11:08.289683    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:11:08.289795    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:11:08.289976    4013 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:11:08.289983    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:11:08.290190    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:11:08.297566    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:11:08.317678    4013 start.go:296] duration metric: took 67.345639ms for postStartSetup
	I0805 16:11:08.317700    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:08.317862    4013 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0805 16:11:08.317884    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:08.317967    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:08.318053    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:08.318144    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:08.318232    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/id_rsa Username:docker}
	I0805 16:11:08.353636    4013 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0805 16:11:08.353694    4013 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0805 16:11:08.385358    4013 fix.go:56] duration metric: took 37.314050272s for fixHost
	I0805 16:11:08.385384    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:08.385514    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:08.385605    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:08.385692    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:08.385761    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:08.385881    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:11:08.386024    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:11:08.386032    4013 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:11:08.446465    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722899468.587788631
	
	I0805 16:11:08.446479    4013 fix.go:216] guest clock: 1722899468.587788631
	I0805 16:11:08.446484    4013 fix.go:229] Guest: 2024-08-05 16:11:08.587788631 -0700 PDT Remote: 2024-08-05 16:11:08.385373 -0700 PDT m=+152.742754663 (delta=202.415631ms)
	I0805 16:11:08.446495    4013 fix.go:200] guest clock delta is within tolerance: 202.415631ms
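The guest clock check above runs `date +%s.%N` on the node and compares it against host time; here the skew is ~202ms, which fix.go accepts. A sketch of that comparison (guestDelta is a hypothetical helper; the exact tolerance threshold is minikube's and not shown in this log):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// guestDelta parses the guest's `date +%s.%N` output and returns how far
// the guest clock sits from the given host timestamp. float64 keeps
// roughly microsecond precision at this epoch, plenty for a skew check.
func guestDelta(out string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Values from the log: guest 1722899468.587788631, host ...08.385373.
	host := time.Unix(1722899468, 385373000)
	d, err := guestDelta("1722899468.587788631", host)
	fmt.Println(d, err, math.Abs(d.Seconds()) < 1) // ~202ms skew, accepted
}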
	I0805 16:11:08.446499    4013 start.go:83] releasing machines lock for "ha-968000-m04", held for 37.375207026s
	I0805 16:11:08.446517    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:08.446647    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetIP
	I0805 16:11:08.469183    4013 out.go:177] * Found network options:
	I0805 16:11:08.489020    4013 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0805 16:11:08.509956    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 16:11:08.509981    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 16:11:08.509995    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:11:08.510012    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:08.510694    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:08.510902    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:08.510988    4013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:11:08.511021    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	W0805 16:11:08.511083    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 16:11:08.511098    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 16:11:08.511109    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:11:08.511171    4013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:11:08.511183    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:08.511199    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:08.511320    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:08.511356    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:08.511475    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:08.511503    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:08.511579    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/id_rsa Username:docker}
	I0805 16:11:08.511613    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:08.511730    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/id_rsa Username:docker}
	W0805 16:11:08.544454    4013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:11:08.544519    4013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:11:08.559248    4013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:11:08.559269    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:11:08.559342    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:11:08.597200    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:11:08.605403    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:11:08.613387    4013 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:11:08.613447    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:11:08.621571    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:11:08.629943    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:11:08.638060    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:11:08.646402    4013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:11:08.654807    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:11:08.662991    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:11:08.671582    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:11:08.680942    4013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:11:08.688339    4013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:11:08.695737    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:11:08.798441    4013 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:11:08.816137    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:11:08.816215    4013 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:11:08.835716    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:11:08.847518    4013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:11:08.867990    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:11:08.879695    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:11:08.890752    4013 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:11:08.914456    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:11:08.925541    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:11:08.941237    4013 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:11:08.944245    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:11:08.952235    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:11:08.965768    4013 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:11:09.067675    4013 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:11:09.170165    4013 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:11:09.170197    4013 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:11:09.184139    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:11:09.281548    4013 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:12:10.328097    4013 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.046493334s)
	I0805 16:12:10.328204    4013 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0805 16:12:10.365222    4013 out.go:177] 
	W0805 16:12:10.386312    4013 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 05 23:11:06 ha-968000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:11:06 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:06.389189042Z" level=info msg="Starting up"
	Aug 05 23:11:06 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:06.389663926Z" level=info msg="containerd not running, starting managed containerd"
	Aug 05 23:11:06 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:06.390143336Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=518
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.408369770Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423348772Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423404929Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423454269Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423464665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423632943Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423651369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423774064Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423808885Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423821728Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423829007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423935968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.424118672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.425786619Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.425825910Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.425936027Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.425969728Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.426078806Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.426121396Z" level=info msg="metadata content store policy set" policy=shared
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427587891Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427669563Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427705862Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427719084Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427779644Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427908991Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428136864Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428235911Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428270099Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428282071Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428290976Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428299125Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428313845Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428325716Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428339937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428355366Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428366031Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428374178Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428386784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428406973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428418331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428429739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428438142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428446212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428453990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428461755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428469955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428479423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428486756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428506619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428545500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428559198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428573033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428581795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428589599Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428635221Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428670612Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428680617Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428689626Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428696156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428800505Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428839684Z" level=info msg="NRI interface is disabled by configuration."
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.429026394Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.429145595Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.429201340Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.429234250Z" level=info msg="containerd successfully booted in 0.021734s"
	Aug 05 23:11:07 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:07.407781552Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 05 23:11:07 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:07.418738721Z" level=info msg="Loading containers: start."
	Aug 05 23:11:07 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:07.516865232Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 05 23:11:07 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:07.582390999Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 05 23:11:08 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:08.356499605Z" level=info msg="Loading containers: done."
	Aug 05 23:11:08 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:08.366049745Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 05 23:11:08 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:08.366234171Z" level=info msg="Daemon has completed initialization"
	Aug 05 23:11:08 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:08.390065153Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 05 23:11:08 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:08.390220880Z" level=info msg="API listen on [::]:2376"
	Aug 05 23:11:08 ha-968000-m04 systemd[1]: Started Docker Application Container Engine.
	Aug 05 23:11:09 ha-968000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Aug 05 23:11:09 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:09.434256146Z" level=info msg="Processing signal 'terminated'"
	Aug 05 23:11:09 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:09.435568971Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 05 23:11:09 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:09.435927759Z" level=info msg="Daemon shutdown complete"
	Aug 05 23:11:09 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:09.436029566Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 05 23:11:09 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:09.436215589Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 05 23:11:10 ha-968000-m04 systemd[1]: docker.service: Deactivated successfully.
	Aug 05 23:11:10 ha-968000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Aug 05 23:11:10 ha-968000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:11:10 ha-968000-m04 dockerd[1111]: time="2024-08-05T23:11:10.480077702Z" level=info msg="Starting up"
	Aug 05 23:12:10 ha-968000-m04 dockerd[1111]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 05 23:12:10 ha-968000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 05 23:12:10 ha-968000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 05 23:12:10 ha-968000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
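The fatal line in the journal is the second dockerd (pid 1111) giving up on /run/containerd/containerd.sock after a minute: the daemon retries the dial until its startup context expires. A minimal reproduction of that retry-until-deadline pattern (socket path taken from the log; timeout shortened for illustration):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Retry a unix-socket dial until the context expires, the way a daemon
	// waits for a companion service. If nothing ever listens at the path,
	// the loop ends with "context deadline exceeded".
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	var d net.Dialer
	for {
		conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
		if err == nil {
			conn.Close()
			fmt.Println("containerd socket is up")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("giving up:", ctx.Err()) // context deadline exceeded
			return
		case <-time.After(100 * time.Millisecond):
			// keep retrying until the deadline
		}
	}
}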
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 05 23:11:06 ha-968000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:11:06 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:06.389189042Z" level=info msg="Starting up"
	Aug 05 23:11:06 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:06.389663926Z" level=info msg="containerd not running, starting managed containerd"
	Aug 05 23:11:06 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:06.390143336Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=518
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.408369770Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423348772Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423404929Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423454269Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423464665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423632943Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423651369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423774064Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423808885Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423821728Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423829007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423935968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.424118672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.425786619Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.425825910Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.425936027Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.425969728Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.426078806Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.426121396Z" level=info msg="metadata content store policy set" policy=shared
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427587891Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427669563Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427705862Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427719084Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427779644Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427908991Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428136864Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428235911Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428270099Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428282071Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428290976Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428299125Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428313845Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428325716Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428339937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428355366Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428366031Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428374178Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428386784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428406973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428418331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428429739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428438142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428446212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428453990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428461755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428469955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428479423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428486756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428506619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428545500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428559198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428573033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428581795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428589599Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428635221Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428670612Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428680617Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428689626Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428696156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428800505Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428839684Z" level=info msg="NRI interface is disabled by configuration."
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.429026394Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.429145595Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.429201340Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.429234250Z" level=info msg="containerd successfully booted in 0.021734s"
	Aug 05 23:11:07 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:07.407781552Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 05 23:11:07 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:07.418738721Z" level=info msg="Loading containers: start."
	Aug 05 23:11:07 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:07.516865232Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 05 23:11:07 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:07.582390999Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 05 23:11:08 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:08.356499605Z" level=info msg="Loading containers: done."
	Aug 05 23:11:08 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:08.366049745Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 05 23:11:08 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:08.366234171Z" level=info msg="Daemon has completed initialization"
	Aug 05 23:11:08 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:08.390065153Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 05 23:11:08 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:08.390220880Z" level=info msg="API listen on [::]:2376"
	Aug 05 23:11:08 ha-968000-m04 systemd[1]: Started Docker Application Container Engine.
	Aug 05 23:11:09 ha-968000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Aug 05 23:11:09 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:09.434256146Z" level=info msg="Processing signal 'terminated'"
	Aug 05 23:11:09 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:09.435568971Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 05 23:11:09 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:09.435927759Z" level=info msg="Daemon shutdown complete"
	Aug 05 23:11:09 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:09.436029566Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 05 23:11:09 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:09.436215589Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 05 23:11:10 ha-968000-m04 systemd[1]: docker.service: Deactivated successfully.
	Aug 05 23:11:10 ha-968000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Aug 05 23:11:10 ha-968000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:11:10 ha-968000-m04 dockerd[1111]: time="2024-08-05T23:11:10.480077702Z" level=info msg="Starting up"
	Aug 05 23:12:10 ha-968000-m04 dockerd[1111]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 05 23:12:10 ha-968000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 05 23:12:10 ha-968000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 05 23:12:10 ha-968000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
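The journal shows the failure shape: the first dockerd (pid 512) boots its managed containerd and comes up cleanly, systemd restarts the unit, and the second dockerd (pid 1111) then spends its entire 60s startup budget (23:11:10 to 23:12:10) failing to dial /run/containerd/containerd.sock. A rough Go sketch of that failure mode, not moby code, with the socket path taken from the log:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Socket path as logged above; containerd must be listening for this to succeed.
		const sock = "/run/containerd/containerd.sock"

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		var d net.Dialer
		conn, err := d.DialContext(ctx, "unix", sock)
		if err != nil {
			// A missing socket fails fast; a wedged listener surfaces
			// context.DeadlineExceeded, matching the journal above.
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("containerd socket is reachable")
	}

Note that the failing dial targets /run/containerd/containerd.sock rather than the managed socket under /var/run/docker/containerd that the first boot used, so the health of the separate containerd unit inside the VM is the first thing to check.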
	W0805 16:12:10.386388    4013 out.go:239] * 
	W0805 16:12:10.387046    4013 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:12:10.449396    4013 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p ha-968000 -v=7 --alsologtostderr" : exit status 90
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-968000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-968000 -n ha-968000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-968000 logs -n 25: (3.565233732s)
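The (dbg) Run/Done pairs above record each subcommand the harness shells out to, along with its wall-clock time. A paraphrase of that pattern (the real helpers live in helpers_test.go; runDbg and its shape are mine):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// runDbg mirrors the harness pattern: shell out to the built binary,
	// capture combined output, and record how long the command took.
	func runDbg(bin string, args ...string) (string, time.Duration, error) {
		start := time.Now()
		out, err := exec.Command(bin, args...).CombinedOutput()
		return string(out), time.Since(start), err
	}

	func main() {
		out, took, err := runDbg("out/minikube-darwin-amd64", "-p", "ha-968000", "logs", "-n", "25")
		fmt.Printf("took %v, err %v\n%s", took, err, out)
	}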
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-968000 cp ha-968000-m03:/home/docker/cp-test.txt                                                                          | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m02:/home/docker/cp-test_ha-968000-m03_ha-968000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n                                                                                                             | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n ha-968000-m02 sudo cat                                                                                      | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | /home/docker/cp-test_ha-968000-m03_ha-968000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-968000 cp ha-968000-m03:/home/docker/cp-test.txt                                                                          | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m04:/home/docker/cp-test_ha-968000-m03_ha-968000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n                                                                                                             | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n ha-968000-m04 sudo cat                                                                                      | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | /home/docker/cp-test_ha-968000-m03_ha-968000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-968000 cp testdata/cp-test.txt                                                                                            | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n                                                                                                             | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-968000 cp ha-968000-m04:/home/docker/cp-test.txt                                                                          | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1635686668/001/cp-test_ha-968000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n                                                                                                             | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-968000 cp ha-968000-m04:/home/docker/cp-test.txt                                                                          | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000:/home/docker/cp-test_ha-968000-m04_ha-968000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n                                                                                                             | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n ha-968000 sudo cat                                                                                          | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | /home/docker/cp-test_ha-968000-m04_ha-968000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-968000 cp ha-968000-m04:/home/docker/cp-test.txt                                                                          | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m02:/home/docker/cp-test_ha-968000-m04_ha-968000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n                                                                                                             | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n ha-968000-m02 sudo cat                                                                                      | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | /home/docker/cp-test_ha-968000-m04_ha-968000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-968000 cp ha-968000-m04:/home/docker/cp-test.txt                                                                          | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m03:/home/docker/cp-test_ha-968000-m04_ha-968000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n                                                                                                             | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n ha-968000-m03 sudo cat                                                                                      | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | /home/docker/cp-test_ha-968000-m04_ha-968000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-968000 node stop m02 -v=7                                                                                                 | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-968000 node start m02 -v=7                                                                                                | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:08 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-968000 -v=7                                                                                                       | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:08 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-968000 -v=7                                                                                                            | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:08 PDT | 05 Aug 24 16:08 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-968000 --wait=true -v=7                                                                                                | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:08 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-968000                                                                                                            | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:12 PDT |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 16:08:35
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
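That header is Go's klog/glog convention, used by every line in the trace below. A small self-contained parser for it (the capture-group names are mine):

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine matches the documented header:
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

	func main() {
		sample := "I0805 16:08:35.679541    4013 out.go:291] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(sample); m != nil {
			fmt.Printf("level=%s date=%s time=%s pid=%s src=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}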
	I0805 16:08:35.679541    4013 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:08:35.680318    4013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:08:35.680328    4013 out.go:304] Setting ErrFile to fd 2...
	I0805 16:08:35.680346    4013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:08:35.680972    4013 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:08:35.682707    4013 out.go:298] Setting JSON to false
	I0805 16:08:35.706964    4013 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2286,"bootTime":1722897029,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:08:35.707087    4013 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:08:35.728606    4013 out.go:177] * [ha-968000] minikube v1.33.1 on Darwin 14.5
	I0805 16:08:35.770605    4013 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:08:35.770660    4013 notify.go:220] Checking for updates...
	I0805 16:08:35.813604    4013 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:08:35.834532    4013 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:08:35.855464    4013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:08:35.876389    4013 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:08:35.897688    4013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:08:35.919248    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:08:35.919436    4013 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:08:35.920085    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:08:35.920151    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:08:35.929520    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51884
	I0805 16:08:35.929878    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:08:35.930279    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:08:35.930302    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:08:35.930554    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:08:35.930686    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:35.959618    4013 out.go:177] * Using the hyperkit driver based on existing profile
	I0805 16:08:36.001252    4013 start.go:297] selected driver: hyperkit
	I0805 16:08:36.001281    4013 start.go:901] validating driver "hyperkit" against &{Name:ha-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:08:36.001519    4013 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:08:36.001702    4013 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:08:36.001927    4013 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 16:08:36.011596    4013 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 16:08:36.017027    4013 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:08:36.017051    4013 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 16:08:36.020140    4013 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:08:36.020202    4013 cni.go:84] Creating CNI manager for ""
	I0805 16:08:36.020212    4013 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0805 16:08:36.020294    4013 start.go:340] cluster config:
	{Name:ha-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:08:36.020400    4013 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:08:36.062580    4013 out.go:177] * Starting "ha-968000" primary control-plane node in "ha-968000" cluster
	I0805 16:08:36.085413    4013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:08:36.085486    4013 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 16:08:36.085505    4013 cache.go:56] Caching tarball of preloaded images
	I0805 16:08:36.085698    4013 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:08:36.085718    4013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:08:36.085921    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:08:36.086796    4013 start.go:360] acquireMachinesLock for ha-968000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:08:36.086915    4013 start.go:364] duration metric: took 94.676µs to acquireMachinesLock for "ha-968000"
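The spec logged when the lock is requested ({Delay:500ms Timeout:13m0s}) matches a juju/mutex-style Spec: retry every Delay until Timeout expires. Here it succeeded immediately (94.676µs) because nothing else held the machines lock. A toy version of that acquire loop, ignoring the cross-process file locking the real mutex performs:

	package main

	import (
		"fmt"
		"time"
	)

	// acquire polls try every delay until it succeeds or timeout elapses.
	func acquire(try func() bool, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if try() {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("lock not acquired within %v", timeout)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		// Uncontended case, as in the log: first attempt wins.
		err := acquire(func() bool { return true }, 500*time.Millisecond, 13*time.Minute)
		fmt.Println("acquired:", err == nil)
	}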
	I0805 16:08:36.086955    4013 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:08:36.086972    4013 fix.go:54] fixHost starting: 
	I0805 16:08:36.087391    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:08:36.087423    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:08:36.096218    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51886
	I0805 16:08:36.096566    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:08:36.096926    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:08:36.096939    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:08:36.097199    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:08:36.097327    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:36.097443    4013 main.go:141] libmachine: (ha-968000) Calling .GetState
	I0805 16:08:36.097545    4013 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:08:36.097604    4013 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid from json: 3418
	I0805 16:08:36.098523    4013 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid 3418 missing from process table
	I0805 16:08:36.098563    4013 fix.go:112] recreateIfNeeded on ha-968000: state=Stopped err=<nil>
	I0805 16:08:36.098579    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	W0805 16:08:36.098669    4013 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:08:36.140439    4013 out.go:177] * Restarting existing hyperkit VM for "ha-968000" ...
	I0805 16:08:36.161262    4013 main.go:141] libmachine: (ha-968000) Calling .Start
	I0805 16:08:36.161541    4013 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:08:36.161569    4013 main.go:141] libmachine: (ha-968000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/hyperkit.pid
	I0805 16:08:36.163159    4013 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid 3418 missing from process table
	I0805 16:08:36.163172    4013 main.go:141] libmachine: (ha-968000) DBG | pid 3418 is in state "Stopped"
	I0805 16:08:36.163189    4013 main.go:141] libmachine: (ha-968000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/hyperkit.pid...
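The stale-pid handling above (pid 3418 still recorded in hyperkit.pid but absent from the process table) is the standard signal-0 probe: sending signal 0 delivers nothing but still checks whether the pid exists. A minimal standalone version, with a shortened pidfile path for illustration:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"strconv"
		"strings"
		"syscall"
	)

	// stale reports whether the pid recorded in pidfile no longer exists.
	func stale(pidfile string) (bool, error) {
		b, err := os.ReadFile(pidfile)
		if err != nil {
			return false, err
		}
		pid, err := strconv.Atoi(strings.TrimSpace(string(b)))
		if err != nil {
			return false, err
		}
		if err := syscall.Kill(pid, 0); errors.Is(err, syscall.ESRCH) {
			return true, nil // no such process: the pid file is stale
		}
		return false, nil // alive (nil) or alive under another user (EPERM)
	}

	func main() {
		s, err := stale("hyperkit.pid") // illustrative path
		fmt.Println("stale:", s, "err:", err)
	}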
	I0805 16:08:36.163382    4013 main.go:141] libmachine: (ha-968000) DBG | Using UUID a9f347e2-e9fc-4e4f-b87b-350754bafb6d
	I0805 16:08:36.294197    4013 main.go:141] libmachine: (ha-968000) DBG | Generated MAC 3e:79:a8:cb:37:4b
	I0805 16:08:36.294223    4013 main.go:141] libmachine: (ha-968000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000
	I0805 16:08:36.294340    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a9f347e2-e9fc-4e4f-b87b-350754bafb6d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4780)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:08:36.294368    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a9f347e2-e9fc-4e4f-b87b-350754bafb6d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4780)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:08:36.294409    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a9f347e2-e9fc-4e4f-b87b-350754bafb6d", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/ha-968000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"}
	I0805 16:08:36.294446    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a9f347e2-e9fc-4e4f-b87b-350754bafb6d -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/ha-968000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"
	I0805 16:08:36.294464    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:08:36.295966    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 DEBUG: hyperkit: Pid is 4025
	I0805 16:08:36.296384    4013 main.go:141] libmachine: (ha-968000) DBG | Attempt 0
	I0805 16:08:36.296402    4013 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:08:36.296476    4013 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid from json: 4025
	I0805 16:08:36.298241    4013 main.go:141] libmachine: (ha-968000) DBG | Searching for 3e:79:a8:cb:37:4b in /var/db/dhcpd_leases ...
	I0805 16:08:36.298320    4013 main.go:141] libmachine: (ha-968000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0805 16:08:36.298334    4013 main.go:141] libmachine: (ha-968000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b15b5a}
	I0805 16:08:36.298341    4013 main.go:141] libmachine: (ha-968000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2acb6}
	I0805 16:08:36.298352    4013 main.go:141] libmachine: (ha-968000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b2ac1c}
	I0805 16:08:36.298378    4013 main.go:141] libmachine: (ha-968000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2ab94}
	I0805 16:08:36.298390    4013 main.go:141] libmachine: (ha-968000) DBG | Found match: 3e:79:a8:cb:37:4b
	I0805 16:08:36.298400    4013 main.go:141] libmachine: (ha-968000) DBG | IP: 192.169.0.5
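The lookup above maps the VM's generated MAC to an IP by scanning macOS's dhcpd lease database. A hypothetical standalone equivalent, assuming the vmnet lease format in which each entry carries name=/ip_address=/hw_address= fields with ip_address listed before hw_address:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// ipForMAC scans the lease file and returns the ip_address of the
	// entry whose hw_address ends with the given MAC.
	func ipForMAC(leasesPath, mac string) (string, error) {
		f, err := os.Open(leasesPath)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=") // remember for this entry
			case strings.HasPrefix(line, "hw_address="):
				// Field looks like "hw_address=1,3e:79:a8:cb:37:4b".
				if strings.HasSuffix(line, ","+mac) {
					return ip, nil
				}
			}
		}
		return "", fmt.Errorf("no lease found for %s", mac)
	}

	func main() {
		ip, err := ipForMAC("/var/db/dhcpd_leases", "3e:79:a8:cb:37:4b")
		fmt.Println(ip, err)
	}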
	I0805 16:08:36.298431    4013 main.go:141] libmachine: (ha-968000) Calling .GetConfigRaw
	I0805 16:08:36.299288    4013 main.go:141] libmachine: (ha-968000) Calling .GetIP
	I0805 16:08:36.299496    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:08:36.299907    4013 machine.go:94] provisionDockerMachine start ...
	I0805 16:08:36.299917    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:36.300052    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:36.300161    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:36.300278    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:36.300399    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:36.300504    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:36.300629    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:36.300879    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:36.300887    4013 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:08:36.304094    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:08:36.358116    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:08:36.358849    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:08:36.358861    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:08:36.358871    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:08:36.358879    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:08:36.744699    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:08:36.744726    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:08:36.859121    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:08:36.859139    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:08:36.859155    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:08:36.859188    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:08:36.860075    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:08:36.860087    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:08:42.442082    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:42 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:08:42.442122    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:42 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:08:42.442133    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:42 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:08:42.468515    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:42 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:08:47.381320    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 16:08:47.381334    4013 main.go:141] libmachine: (ha-968000) Calling .GetMachineName
	I0805 16:08:47.381494    4013 buildroot.go:166] provisioning hostname "ha-968000"
	I0805 16:08:47.381505    4013 main.go:141] libmachine: (ha-968000) Calling .GetMachineName
	I0805 16:08:47.381614    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.381731    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:47.381824    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.381916    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.382009    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:47.382131    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:47.382292    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:47.382300    4013 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-968000 && echo "ha-968000" | sudo tee /etc/hostname
	I0805 16:08:47.461361    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-968000
	
	I0805 16:08:47.461391    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.461523    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:47.461610    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.461697    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.461801    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:47.461927    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:47.462076    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:47.462087    4013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-968000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-968000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-968000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:08:47.534682    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
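
The script above updates /etc/hosts idempotently: it does nothing when the hostname is already mapped, rewrites an existing 127.0.1.1 line when present, and appends one otherwise. The same decision tree in Go, as a hypothetical helper:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the shell above: leave /etc/hosts alone if
	// the hostname is already present, rewrite an existing 127.0.1.1 line,
	// or append one otherwise.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		for _, line := range lines {
			if strings.HasSuffix(line, " "+hostname) || strings.HasSuffix(line, "\t"+hostname) {
				return nil // hostname already mapped
			}
		}
		want := "127.0.1.1 " + hostname
		for i, line := range lines {
			if strings.HasPrefix(line, "127.0.1.1") {
				lines[i] = want
				return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
			}
		}
		lines = append(lines, want)
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
	}

	func main() {
		fmt.Println(ensureHostsEntry("/etc/hosts", "ha-968000"))
	}
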
	I0805 16:08:47.534701    4013 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:08:47.534713    4013 buildroot.go:174] setting up certificates
	I0805 16:08:47.534720    4013 provision.go:84] configureAuth start
	I0805 16:08:47.534727    4013 main.go:141] libmachine: (ha-968000) Calling .GetMachineName
	I0805 16:08:47.534861    4013 main.go:141] libmachine: (ha-968000) Calling .GetIP
	I0805 16:08:47.534954    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.535056    4013 provision.go:143] copyHostCerts
	I0805 16:08:47.535084    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:08:47.535151    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:08:47.535160    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:08:47.535302    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:08:47.535496    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:08:47.535537    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:08:47.535561    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:08:47.535642    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:08:47.535782    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:08:47.535820    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:08:47.535825    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:08:47.535901    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:08:47.536041    4013 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.ha-968000 san=[127.0.0.1 192.169.0.5 ha-968000 localhost minikube]
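
The server certificate is generated with the SAN set listed in the log: the loopback address, the VM IP, and the ha-968000/localhost/minikube names. A sketch of that certificate template with Go's crypto/x509 (a hypothetical helper, not minikube's implementation; the key size and validity period are assumptions):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// newServerCert signs a server cert carrying the SANs from the log
	// with the supplied CA; it returns the DER bytes and the new key.
	func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-968000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the provision line above
			DNSNames:    []string{"ha-968000", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}
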
	I0805 16:08:47.710785    4013 provision.go:177] copyRemoteCerts
	I0805 16:08:47.710840    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:08:47.710858    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.710996    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:47.711136    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.711274    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:47.711374    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:08:47.750129    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:08:47.750206    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:08:47.771089    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:08:47.771160    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0805 16:08:47.789876    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:08:47.789938    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:08:47.809484    4013 provision.go:87] duration metric: took 274.74692ms to configureAuth
	I0805 16:08:47.809497    4013 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:08:47.809670    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:08:47.809683    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:47.809829    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.809915    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:47.810002    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.810076    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.810154    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:47.810265    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:47.810397    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:47.810405    4013 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:08:47.878284    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:08:47.878296    4013 buildroot.go:70] root file system type: tmpfs
	I0805 16:08:47.878387    4013 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:08:47.878399    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.878536    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:47.878623    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.878711    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.878808    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:47.878940    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:47.879074    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:47.879122    4013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:08:47.957253    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:08:47.957278    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.957421    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:47.957524    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.957614    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.957714    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:47.957844    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:47.957985    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:47.957996    4013 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:08:49.653715    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:08:49.653732    4013 machine.go:97] duration metric: took 13.353812952s to provisionDockerMachine
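
The SSH command above (diff ... || { mv ...; systemctl ...; }) is a compare-then-swap: the freshly rendered docker.service.new replaces the unit on disk, followed by daemon-reload, enable, and restart, only when it differs from what is already there, so an unchanged configuration costs no restart. The same pattern locally in Go (illustrative; minikube runs it remotely over SSH):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// updateUnit swaps in the rendered unit and restarts docker only when
	// the file on disk actually differs (or does not exist yet).
	func updateUnit(path string, rendered []byte) error {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, rendered) {
			return nil // unchanged: skip the reload and restart
		}
		if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"},
		} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}
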
	I0805 16:08:49.653746    4013 start.go:293] postStartSetup for "ha-968000" (driver="hyperkit")
	I0805 16:08:49.653760    4013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:08:49.653771    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:49.653973    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:08:49.653990    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:49.654090    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:49.654219    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:49.654313    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:49.654396    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:08:49.695524    4013 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:08:49.698720    4013 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:08:49.698734    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:08:49.698825    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:08:49.699014    4013 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:08:49.699020    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:08:49.699239    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:08:49.707453    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:08:49.726493    4013 start.go:296] duration metric: took 72.739242ms for postStartSetup
	I0805 16:08:49.726518    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:49.726678    4013 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0805 16:08:49.726689    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:49.726778    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:49.726859    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:49.726953    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:49.727030    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:08:49.773612    4013 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0805 16:08:49.773669    4013 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0805 16:08:49.839587    4013 fix.go:56] duration metric: took 13.752613014s for fixHost
	I0805 16:08:49.839610    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:49.839781    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:49.839886    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:49.839982    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:49.840087    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:49.840208    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:49.840351    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:49.840358    4013 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:08:49.909831    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722899330.049194417
	
	I0805 16:08:49.909843    4013 fix.go:216] guest clock: 1722899330.049194417
	I0805 16:08:49.909849    4013 fix.go:229] Guest: 2024-08-05 16:08:50.049194417 -0700 PDT Remote: 2024-08-05 16:08:49.8396 -0700 PDT m=+14.197025337 (delta=209.594417ms)
	I0805 16:08:49.909866    4013 fix.go:200] guest clock delta is within tolerance: 209.594417ms
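
The fix step parses the guest's `date +%s.%N` output, computes the guest-to-host delta, and proceeds only while the drift stays inside a tolerance (about 209ms here). A small Go sketch of that computation (the one-second tolerance below is an assumption; the log does not show the actual threshold):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses `date +%s.%N` output and returns guest minus local.
	func clockDelta(guest string, local time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(local), nil
	}

	func main() {
		delta, err := clockDelta("1722899330.049194417", time.Now())
		if err != nil {
			panic(err)
		}
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // assumed threshold
		fmt.Printf("delta=%v within=%v\n", delta, delta <= tolerance)
	}
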
	I0805 16:08:49.909870    4013 start.go:83] releasing machines lock for "ha-968000", held for 13.822941144s
	I0805 16:08:49.909890    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:49.910020    4013 main.go:141] libmachine: (ha-968000) Calling .GetIP
	I0805 16:08:49.910132    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:49.910474    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:49.910586    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:49.910664    4013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:08:49.910695    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:49.910746    4013 ssh_runner.go:195] Run: cat /version.json
	I0805 16:08:49.910757    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:49.910786    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:49.910854    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:49.910893    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:49.910967    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:49.910992    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:49.911086    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:49.911105    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:08:49.911177    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:08:49.948334    4013 ssh_runner.go:195] Run: systemctl --version
	I0805 16:08:49.997557    4013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 16:08:50.001927    4013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:08:50.001971    4013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:08:50.014441    4013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:08:50.014455    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:08:50.014568    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:08:50.030880    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:08:50.040000    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:08:50.048917    4013 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:08:50.048956    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:08:50.058052    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:08:50.067040    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:08:50.075877    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:08:50.084739    4013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:08:50.093910    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:08:50.102684    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:08:50.111468    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:08:50.120485    4013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:08:50.128670    4013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:08:50.136701    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:08:50.239872    4013 ssh_runner.go:195] Run: sudo systemctl restart containerd
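
The sed sequence above forces containerd onto the cgroupfs driver and the io.containerd.runc.v2 runtime before the daemon is restarted. The central rewrite, expressed as a Go regexp instead of sed (illustrative only):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true`
		// mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
		fmt.Println(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
	}
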
	I0805 16:08:50.259056    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:08:50.259134    4013 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:08:50.276716    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:08:50.288092    4013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:08:50.305475    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:08:50.315851    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:08:50.325889    4013 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:08:50.345027    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:08:50.355226    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:08:50.370181    4013 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:08:50.373242    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:08:50.380619    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:08:50.394005    4013 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:08:50.490673    4013 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:08:50.595291    4013 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:08:50.595364    4013 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:08:50.609503    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:08:50.704344    4013 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:08:53.027644    4013 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.323281261s)
	I0805 16:08:53.027701    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:08:53.038843    4013 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 16:08:53.053238    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:08:53.063556    4013 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:08:53.166406    4013 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:08:53.281072    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:08:53.386855    4013 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:08:53.400726    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:08:53.412004    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:08:53.527406    4013 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:08:53.592203    4013 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:08:53.592286    4013 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:08:53.596745    4013 start.go:563] Will wait 60s for crictl version
	I0805 16:08:53.596797    4013 ssh_runner.go:195] Run: which crictl
	I0805 16:08:53.600648    4013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:08:53.626561    4013 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0805 16:08:53.626630    4013 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:08:53.645043    4013 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:08:53.705589    4013 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 16:08:53.705632    4013 main.go:141] libmachine: (ha-968000) Calling .GetIP
	I0805 16:08:53.705996    4013 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0805 16:08:53.710588    4013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:08:53.720355    4013 kubeadm.go:883] updating cluster {Name:ha-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 16:08:53.720443    4013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:08:53.720494    4013 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:08:53.733778    4013 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240730-75a5af0c
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0805 16:08:53.733792    4013 docker.go:615] Images already preloaded, skipping extraction
	I0805 16:08:53.733871    4013 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:08:53.750560    4013 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240730-75a5af0c
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0805 16:08:53.750581    4013 cache_images.go:84] Images are preloaded, skipping loading
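
Extraction is skipped because the `docker images` listing already contains every image the preload ships. The set comparison behind that decision, sketched as a hypothetical Go helper:

	package main

	import "fmt"

	// missingImages returns the images in want that are absent from have.
	func missingImages(have, want []string) []string {
		present := make(map[string]bool, len(have))
		for _, img := range have {
			present[img] = true
		}
		var missing []string
		for _, img := range want {
			if !present[img] {
				missing = append(missing, img)
			}
		}
		return missing
	}

	func main() {
		have := []string{"registry.k8s.io/pause:3.9", "registry.k8s.io/etcd:3.5.12-0"}
		want := []string{"registry.k8s.io/pause:3.9", "registry.k8s.io/etcd:3.5.12-0"}
		fmt.Println(len(missingImages(have, want)) == 0) // true: skip loading
	}
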
	I0805 16:08:53.750593    4013 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.30.3 docker true true} ...
	I0805 16:08:53.750678    4013 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-968000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 16:08:53.750747    4013 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 16:08:53.787873    4013 cni.go:84] Creating CNI manager for ""
	I0805 16:08:53.787890    4013 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0805 16:08:53.787901    4013 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 16:08:53.787917    4013 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-968000 NodeName:ha-968000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 16:08:53.787998    4013 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-968000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 16:08:53.788013    4013 kube-vip.go:115] generating kube-vip config ...
	I0805 16:08:53.788070    4013 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 16:08:53.800656    4013 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 16:08:53.800732    4013 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0805 16:08:53.800782    4013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 16:08:53.809476    4013 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:08:53.809517    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0805 16:08:53.816818    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0805 16:08:53.830799    4013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:08:53.844236    4013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0805 16:08:53.858097    4013 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0805 16:08:53.871426    4013 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0805 16:08:53.874277    4013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:08:53.883655    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:08:53.988496    4013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:08:54.003102    4013 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000 for IP: 192.169.0.5
	I0805 16:08:54.003116    4013 certs.go:194] generating shared ca certs ...
	I0805 16:08:54.003129    4013 certs.go:226] acquiring lock for ca certs: {Name:mkb83e058d89c7d4e66f4136f377a3c305b13735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:08:54.003311    4013 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key
	I0805 16:08:54.003384    4013 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key
	I0805 16:08:54.003396    4013 certs.go:256] generating profile certs ...
	I0805 16:08:54.003511    4013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.key
	I0805 16:08:54.003533    4013 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key.e79882c6
	I0805 16:08:54.003547    4013 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt.e79882c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0805 16:08:54.115170    4013 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt.e79882c6 ...
	I0805 16:08:54.115186    4013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt.e79882c6: {Name:mk08e7d67872e7bcbb9c4a5ebb3c1a0585610c24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:08:54.115545    4013 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key.e79882c6 ...
	I0805 16:08:54.115555    4013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key.e79882c6: {Name:mk05314b1c47ab3f7e3ebdc93ec7e7e8886a1b84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:08:54.115785    4013 certs.go:381] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt.e79882c6 -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt
	I0805 16:08:54.116009    4013 certs.go:385] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key.e79882c6 -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key
	I0805 16:08:54.116270    4013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key
	I0805 16:08:54.116285    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 16:08:54.116311    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 16:08:54.116333    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 16:08:54.116355    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 16:08:54.116375    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 16:08:54.116396    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 16:08:54.116416    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 16:08:54.116436    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 16:08:54.116538    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem (1338 bytes)
	W0805 16:08:54.116595    4013 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0805 16:08:54.116605    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:08:54.116642    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem (1082 bytes)
	I0805 16:08:54.116678    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:08:54.116714    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem (1675 bytes)
	I0805 16:08:54.116792    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:08:54.116828    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem -> /usr/share/ca-certificates/1678.pem
	I0805 16:08:54.116855    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /usr/share/ca-certificates/16782.pem
	I0805 16:08:54.116877    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:08:54.117335    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:08:54.150739    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 16:08:54.186504    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:08:54.226561    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:08:54.269928    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0805 16:08:54.303048    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 16:08:54.323374    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:08:54.342974    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 16:08:54.363396    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0805 16:08:54.383241    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0805 16:08:54.402950    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:08:54.422603    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 16:08:54.436211    4013 ssh_runner.go:195] Run: openssl version
	I0805 16:08:54.440410    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0805 16:08:54.448686    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0805 16:08:54.452045    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:08:54.452085    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0805 16:08:54.456273    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0805 16:08:54.464533    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0805 16:08:54.472739    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0805 16:08:54.476114    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:08:54.476150    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0805 16:08:54.480401    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 16:08:54.488643    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:08:54.496792    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:08:54.500141    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:08:54.500183    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:08:54.504411    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
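
The test -L / ln -fs pairs above install each CA the way OpenSSL's trust store expects: compute the certificate's subject hash and symlink <hash>.0 in /etc/ssl/certs to it. A Go sketch that shells out to the same openssl invocation (illustrative helper):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// trustCert creates /etc/ssl/certs/<subject-hash>.0 pointing at the
	// given PEM file, skipping certs that are already linked.
	func trustCert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if _, err := os.Lstat(link); err == nil {
			return nil // already trusted
		}
		return os.Symlink(pemPath, link)
	}

	func main() {
		fmt.Println(trustCert("/usr/share/ca-certificates/minikubeCA.pem"))
	}
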
	I0805 16:08:54.512563    4013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:08:54.516172    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 16:08:54.520959    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 16:08:54.525326    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 16:08:54.530085    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 16:08:54.534367    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 16:08:54.538835    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
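
Each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now; a failure here would force regeneration before the cluster restart. The equivalent check in pure Go (hypothetical helper):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path will have
	// expired d from now (openssl -checkend fails in exactly that case).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		bad, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
		fmt.Println(bad, err) // bad==true would trigger regeneration
	}
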
	I0805 16:08:54.543179    4013 kubeadm.go:392] StartCluster: {Name:ha-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:08:54.543300    4013 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:08:54.556340    4013 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 16:08:54.563823    4013 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 16:08:54.563834    4013 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 16:08:54.563876    4013 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 16:08:54.571534    4013 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:08:54.571871    4013 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-968000" does not appear in /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:08:54.571963    4013 kubeconfig.go:62] /Users/jenkins/minikube-integration/19373-1122/kubeconfig needs updating (will repair): [kubeconfig missing "ha-968000" cluster setting kubeconfig missing "ha-968000" context setting]
	I0805 16:08:54.572632    4013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/kubeconfig: {Name:mk2a0d8b4d330b3c26432fc65d015ddf98a9cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:08:54.573442    4013 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:08:54.573629    4013 kapi.go:59] client config for ha-968000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x85c5060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:08:54.573946    4013 cert_rotation.go:137] Starting client certificate rotation controller
	I0805 16:08:54.574116    4013 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 16:08:54.581700    4013 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0805 16:08:54.581717    4013 kubeadm.go:597] duration metric: took 17.878919ms to restartPrimaryControlPlane
	I0805 16:08:54.581733    4013 kubeadm.go:394] duration metric: took 38.554869ms to StartCluster
	I0805 16:08:54.581748    4013 settings.go:142] acquiring lock: {Name:mk564a817a54ecf2aef16a4d2309e85208c0231f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:08:54.581853    4013 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:08:54.582215    4013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/kubeconfig: {Name:mk2a0d8b4d330b3c26432fc65d015ddf98a9cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:08:54.582428    4013 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:08:54.582441    4013 start.go:241] waiting for startup goroutines ...
	I0805 16:08:54.582452    4013 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 16:08:54.582577    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:08:54.626035    4013 out.go:177] * Enabled addons: 
	I0805 16:08:54.646951    4013 addons.go:510] duration metric: took 64.498286ms for enable addons: enabled=[]
	I0805 16:08:54.646991    4013 start.go:246] waiting for cluster config update ...
	I0805 16:08:54.647007    4013 start.go:255] writing updated cluster config ...
	I0805 16:08:54.669067    4013 out.go:177] 
	I0805 16:08:54.690499    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:08:54.690643    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:08:54.713097    4013 out.go:177] * Starting "ha-968000-m02" control-plane node in "ha-968000" cluster
	I0805 16:08:54.754948    4013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:08:54.755014    4013 cache.go:56] Caching tarball of preloaded images
	I0805 16:08:54.755180    4013 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:08:54.755198    4013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:08:54.755327    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:08:54.756294    4013 start.go:360] acquireMachinesLock for ha-968000-m02: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:08:54.756399    4013 start.go:364] duration metric: took 80.734µs to acquireMachinesLock for "ha-968000-m02"
	I0805 16:08:54.756425    4013 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:08:54.756433    4013 fix.go:54] fixHost starting: m02
	I0805 16:08:54.756872    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:08:54.756903    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:08:54.766304    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51908
	I0805 16:08:54.766655    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:08:54.766978    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:08:54.766996    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:08:54.767193    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:08:54.767300    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:08:54.767383    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetState
	I0805 16:08:54.767464    4013 main.go:141] libmachine: (ha-968000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:08:54.767541    4013 main.go:141] libmachine: (ha-968000-m02) DBG | hyperkit pid from json: 3958
	I0805 16:08:54.768456    4013 main.go:141] libmachine: (ha-968000-m02) DBG | hyperkit pid 3958 missing from process table
	I0805 16:08:54.768475    4013 fix.go:112] recreateIfNeeded on ha-968000-m02: state=Stopped err=<nil>
	I0805 16:08:54.768483    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	W0805 16:08:54.768562    4013 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:08:54.811088    4013 out.go:177] * Restarting existing hyperkit VM for "ha-968000-m02" ...
	I0805 16:08:54.832129    4013 main.go:141] libmachine: (ha-968000-m02) Calling .Start
	I0805 16:08:54.832449    4013 main.go:141] libmachine: (ha-968000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:08:54.832594    4013 main.go:141] libmachine: (ha-968000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/hyperkit.pid
	I0805 16:08:54.834273    4013 main.go:141] libmachine: (ha-968000-m02) DBG | hyperkit pid 3958 missing from process table
	I0805 16:08:54.834290    4013 main.go:141] libmachine: (ha-968000-m02) DBG | pid 3958 is in state "Stopped"
	I0805 16:08:54.834314    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/hyperkit.pid...
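
Here the pid recorded in hyperkit.pid (3958) is absent from the process table, so the file is treated as stale and removed before the VM is restarted. A sketch of that liveness probe, assuming a plain-text pid file (illustrative only, not the driver's source):

    // removeIfStale deletes a pid file when the recorded process is gone,
    // probing the pid with signal 0 (error check only, no signal delivered).
    package main

    import (
    	"fmt"
    	"os"
    	"strconv"
    	"strings"
    	"syscall"
    )

    func pidAlive(pid int) bool {
    	// Kill with signal 0 returns ESRCH when no such process exists.
    	return syscall.Kill(pid, 0) == nil
    }

    func removeIfStale(pidFile string) error {
    	data, err := os.ReadFile(pidFile)
    	if err != nil {
    		return err
    	}
    	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
    	if err != nil {
    		return err
    	}
    	if pidAlive(pid) {
    		return fmt.Errorf("pid %d still running", pid)
    	}
    	return os.Remove(pidFile) // stale: safe to clean up before restarting the VM
    }

    func main() {
    	fmt.Println(removeIfStale("/tmp/hyperkit.pid"))
    }
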
	I0805 16:08:54.834555    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Using UUID fe2b7178-e807-4f71-b597-390ca402ab71
	I0805 16:08:54.862624    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Generated MAC b2:64:5d:40:b:b5
	I0805 16:08:54.862655    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000
	I0805 16:08:54.862830    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fe2b7178-e807-4f71-b597-390ca402ab71", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaa20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:08:54.862873    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fe2b7178-e807-4f71-b597-390ca402ab71", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaa20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:08:54.862907    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "fe2b7178-e807-4f71-b597-390ca402ab71", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/ha-968000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"}
	I0805 16:08:54.862951    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U fe2b7178-e807-4f71-b597-390ca402ab71 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/ha-968000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"
	I0805 16:08:54.862972    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:08:54.864230    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 DEBUG: hyperkit: Pid is 4036
	I0805 16:08:54.864617    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Attempt 0
	I0805 16:08:54.864628    4013 main.go:141] libmachine: (ha-968000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:08:54.864712    4013 main.go:141] libmachine: (ha-968000-m02) DBG | hyperkit pid from json: 4036
	I0805 16:08:54.866673    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Searching for b2:64:5d:40:b:b5 in /var/db/dhcpd_leases ...
	I0805 16:08:54.866730    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0805 16:08:54.866746    4013 main.go:141] libmachine: (ha-968000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2acfd}
	I0805 16:08:54.866756    4013 main.go:141] libmachine: (ha-968000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b15b5a}
	I0805 16:08:54.866763    4013 main.go:141] libmachine: (ha-968000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2acb6}
	I0805 16:08:54.866779    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Found match: b2:64:5d:40:b:b5
	I0805 16:08:54.866785    4013 main.go:141] libmachine: (ha-968000-m02) DBG | IP: 192.169.0.6
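
Because there is no guest agent at this point, the driver recovers the VM's IP by scanning macOS's vmnet DHCP lease database for the MAC address it generated. A sketch of that lookup; the raw field names in /var/db/dhcpd_leases (name=, ip_address=, hw_address=1,...) are assumptions inferred from the parsed entries echoed above, not taken from the log itself:

    // ipForMAC scans the vmnet lease file and returns the ip_address of the
    // entry whose hw_address matches the given MAC.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func ipForMAC(mac string) (string, error) {
    	f, err := os.Open("/var/db/dhcpd_leases")
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()
    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if v, ok := strings.CutPrefix(line, "ip_address="); ok {
    			ip = v // remember the IP of the lease entry we are inside
    		}
    		if v, ok := strings.CutPrefix(line, "hw_address=1,"); ok && v == mac {
    			return ip, nil
    		}
    	}
    	return "", fmt.Errorf("no lease for %s", mac)
    }

    func main() {
    	fmt.Println(ipForMAC("b2:64:5d:40:b:b5"))
    }
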
	I0805 16:08:54.866826    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetConfigRaw
	I0805 16:08:54.867497    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetIP
	I0805 16:08:54.867687    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:08:54.868091    4013 machine.go:94] provisionDockerMachine start ...
	I0805 16:08:54.868103    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:08:54.868265    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:08:54.868366    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:08:54.868470    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:08:54.868561    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:08:54.868654    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:08:54.868809    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:54.868963    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:08:54.868973    4013 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:08:54.872068    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:08:54.880205    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:08:54.881201    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:08:54.881214    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:08:54.881243    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:08:54.881257    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:08:55.265892    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:08:55.265907    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:08:55.380667    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:08:55.380687    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:08:55.380695    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:08:55.380701    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:08:55.381533    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:08:55.381546    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:09:00.973735    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:09:00 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:09:00.973856    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:09:00 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:09:00.973866    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:09:00 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:09:00.997819    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:09:00 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:09:05.931816    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 16:09:05.931831    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetMachineName
	I0805 16:09:05.931997    4013 buildroot.go:166] provisioning hostname "ha-968000-m02"
	I0805 16:09:05.932009    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetMachineName
	I0805 16:09:05.932102    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:05.932202    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:05.932286    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:05.932365    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:05.932456    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:05.932575    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:05.932721    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:09:05.932729    4013 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-968000-m02 && echo "ha-968000-m02" | sudo tee /etc/hostname
	I0805 16:09:05.993192    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-968000-m02
	
	I0805 16:09:05.993215    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:05.993338    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:05.993436    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:05.993511    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:05.993594    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:05.993723    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:05.993859    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:09:05.993871    4013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-968000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-968000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-968000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:09:06.050566    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
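
The script above pins the node name to 127.0.1.1: it leaves /etc/hosts alone if the name already resolves, rewrites an existing 127.0.1.1 entry if present, and appends one otherwise. The same idempotent edit as a Go sketch (hypothetical helper, not minikube code):

    // pinHostname reproduces the shell logic in the log: ensure /etc/hosts
    // maps 127.0.1.1 to the node name, changing nothing if it already does.
    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func pinHostname(name string) error {
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		return err
    	}
    	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
    		return nil // hostname already present, nothing to do
    	}
    	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	line := "127.0.1.1 " + name
    	if loop.Match(data) {
    		data = loop.ReplaceAll(data, []byte(line)) // rewrite existing 127.0.1.1 entry
    	} else {
    		data = append(data, []byte(line+"\n")...) // or append a new one
    	}
    	return os.WriteFile("/etc/hosts", data, 0644)
    }

    func main() { fmt.Println(pinHostname("ha-968000-m02")) }
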
	I0805 16:09:06.050581    4013 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:09:06.050591    4013 buildroot.go:174] setting up certificates
	I0805 16:09:06.050596    4013 provision.go:84] configureAuth start
	I0805 16:09:06.050603    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetMachineName
	I0805 16:09:06.050733    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetIP
	I0805 16:09:06.050844    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:06.050935    4013 provision.go:143] copyHostCerts
	I0805 16:09:06.050963    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:09:06.051010    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:09:06.051016    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:09:06.051159    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:09:06.051373    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:09:06.051403    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:09:06.051408    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:09:06.051520    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:09:06.051663    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:09:06.051692    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:09:06.051697    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:09:06.051762    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:09:06.051905    4013 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.ha-968000-m02 san=[127.0.0.1 192.169.0.6 ha-968000-m02 localhost minikube]
	I0805 16:09:06.144117    4013 provision.go:177] copyRemoteCerts
	I0805 16:09:06.144168    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:09:06.144182    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:06.144315    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:06.144419    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.144519    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:06.144605    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/id_rsa Username:docker}
	I0805 16:09:06.177583    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:09:06.177652    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:09:06.196674    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:09:06.196731    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:09:06.215833    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:09:06.215904    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 16:09:06.234708    4013 provision.go:87] duration metric: took 184.105335ms to configureAuth
	I0805 16:09:06.234721    4013 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:09:06.234888    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:09:06.234902    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:06.235034    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:06.235129    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:06.235219    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.235306    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.235377    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:06.235486    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:06.235620    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:09:06.235627    4013 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:09:06.286203    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:09:06.286215    4013 buildroot.go:70] root file system type: tmpfs
	I0805 16:09:06.286297    4013 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:09:06.286308    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:06.286429    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:06.286523    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.286613    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.286698    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:06.286817    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:06.286956    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:09:06.287002    4013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:09:06.347900    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:09:06.347916    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:06.348060    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:06.348168    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.348290    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.348380    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:06.348531    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:06.348709    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:09:06.348724    4013 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:09:07.986428    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:09:07.986451    4013 machine.go:97] duration metric: took 13.118346339s to provisionDockerMachine
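
The "diff -u ... || { mv ...; systemctl ... }" command a few lines up makes the unit update idempotent: systemd is only reloaded and Docker only restarted when the rendered unit actually differs (here the diff failed because docker.service did not exist yet, so the new file was installed and the service enabled). The same write-if-changed pattern as a Go sketch (illustrative, not minikube source):

    // updateUnit installs a systemd unit and restarts the service only when
    // the on-disk content differs from the rendered content.
    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    func updateUnit(path string, unit []byte) error {
    	old, _ := os.ReadFile(path) // a missing file reads as empty -> counts as changed
    	if bytes.Equal(old, unit) {
    		return nil // unchanged: skip daemon-reload and restart entirely
    	}
    	if err := os.WriteFile(path, unit, 0644); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "enable", "docker"},
    		{"systemctl", "restart", "docker"},
    	} {
    		if err := exec.Command("sudo", args...).Run(); err != nil {
    			return err
    		}
    	}
    	return nil
    }

    func main() {
    	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
    	fmt.Println(updateUnit("/lib/systemd/system/docker.service", unit))
    }
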
	I0805 16:09:07.986459    4013 start.go:293] postStartSetup for "ha-968000-m02" (driver="hyperkit")
	I0805 16:09:07.986469    4013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:09:07.986480    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:07.986670    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:09:07.986681    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:07.986783    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:07.986882    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:07.986962    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:07.987053    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/id_rsa Username:docker}
	I0805 16:09:08.025708    4013 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:09:08.030674    4013 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:09:08.030690    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:09:08.030788    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:09:08.030933    4013 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:09:08.030940    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:09:08.031094    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:09:08.040549    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:09:08.073731    4013 start.go:296] duration metric: took 87.255709ms for postStartSetup
	I0805 16:09:08.073758    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:08.073944    4013 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0805 16:09:08.073958    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:08.074051    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:08.074132    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:08.074215    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:08.074303    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/id_rsa Username:docker}
	I0805 16:09:08.106482    4013 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0805 16:09:08.106540    4013 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0805 16:09:08.160338    4013 fix.go:56] duration metric: took 13.403896455s for fixHost
	I0805 16:09:08.160384    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:08.160527    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:08.160625    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:08.160714    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:08.160794    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:08.160927    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:08.161086    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:09:08.161094    4013 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 16:09:08.212458    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722899348.353849181
	
	I0805 16:09:08.212468    4013 fix.go:216] guest clock: 1722899348.353849181
	I0805 16:09:08.212476    4013 fix.go:229] Guest: 2024-08-05 16:09:08.353849181 -0700 PDT Remote: 2024-08-05 16:09:08.160354 -0700 PDT m=+32.517773342 (delta=193.495181ms)
	I0805 16:09:08.212487    4013 fix.go:200] guest clock delta is within tolerance: 193.495181ms
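
The guest-clock check reads "date +%s.%N" inside the VM ("1722899348.353849181") and compares it against the host's wall clock for the same moment; the 193.495181ms delta is accepted. A sketch of that comparison using the exact values from the log; the 2-second tolerance below is an assumed placeholder, since the log only says "within tolerance":

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Guest side: parsed from `date +%s.%N` -> "1722899348.353849181".
    	guest := time.Unix(1722899348, 353849181)
    	// Host side: the local timestamp the log records for the same moment.
    	host := time.Date(2024, 8, 5, 16, 9, 8, 160354000, time.FixedZone("PDT", -7*3600))
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta // compare absolute skew
    	}
    	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < 2*time.Second)
    }
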
	I0805 16:09:08.212490    4013 start.go:83] releasing machines lock for "ha-968000-m02", held for 13.45607681s
	I0805 16:09:08.212505    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:08.212639    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetIP
	I0805 16:09:08.235368    4013 out.go:177] * Found network options:
	I0805 16:09:08.255968    4013 out.go:177]   - NO_PROXY=192.169.0.5
	W0805 16:09:08.277055    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:09:08.277126    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:08.277962    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:08.278232    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:08.278363    4013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:09:08.278403    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	W0805 16:09:08.278441    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:09:08.278542    4013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:09:08.278561    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:08.278609    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:08.278735    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:08.278828    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:08.278924    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:08.279039    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:08.279094    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:08.279296    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/id_rsa Username:docker}
	I0805 16:09:08.279328    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/id_rsa Username:docker}
	W0805 16:09:08.308476    4013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:09:08.308543    4013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:09:08.366966    4013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:09:08.366989    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:09:08.367106    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:09:08.383096    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:09:08.391318    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:09:08.399437    4013 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:09:08.399485    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:09:08.407713    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:09:08.415945    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:09:08.424060    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:09:08.432199    4013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:09:08.440635    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:09:08.449476    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:09:08.457693    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
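
Taken together, the sed edits above leave /etc/containerd/config.toml with roughly this shape (reconstructed from the commands, so the section nesting is approximate rather than copied from the machine):

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.9"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false
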
	I0805 16:09:08.465963    4013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:09:08.473316    4013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:09:08.480715    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:09:08.580965    4013 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:09:08.599460    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:09:08.599526    4013 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:09:08.618244    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:09:08.628953    4013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:09:08.643835    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:09:08.654207    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:09:08.667243    4013 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:09:08.688662    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:09:08.699359    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:09:08.714408    4013 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:09:08.717488    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:09:08.724576    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:09:08.738058    4013 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:09:08.841454    4013 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:09:08.945955    4013 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:09:08.945979    4013 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:09:08.960827    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:09:09.064765    4013 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:09:11.412428    4013 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.347643222s)
	I0805 16:09:11.412491    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:09:11.422964    4013 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 16:09:11.435663    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:09:11.446013    4013 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:09:11.539337    4013 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:09:11.650058    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:09:11.748634    4013 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:09:11.762213    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:09:11.773039    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:09:11.872006    4013 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:09:11.939388    4013 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:09:11.939480    4013 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:09:11.943952    4013 start.go:563] Will wait 60s for crictl version
	I0805 16:09:11.944006    4013 ssh_runner.go:195] Run: which crictl
	I0805 16:09:11.947391    4013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:09:11.980231    4013 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
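
The "Will wait 60s for socket path" step polls until /var/run/cri-dockerd.sock appears after the cri-docker restart, then moves on to the crictl version probe shown above. A minimal sketch of such a wait loop (illustrative, not minikube source):

    // waitForPath polls for a filesystem path until it exists or the
    // timeout elapses, the way the log waits for the CRI socket.
    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil // socket (or any file) is present
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	fmt.Println(waitForPath("/var/run/cri-dockerd.sock", 60*time.Second))
    }
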
	I0805 16:09:11.980302    4013 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:09:11.997853    4013 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:09:12.060154    4013 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 16:09:12.080904    4013 out.go:177]   - env NO_PROXY=192.169.0.5
	I0805 16:09:12.102334    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetIP
	I0805 16:09:12.102720    4013 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0805 16:09:12.107517    4013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:09:12.117349    4013 mustload.go:65] Loading cluster: ha-968000
	I0805 16:09:12.117532    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:09:12.117765    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:09:12.117781    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:09:12.126279    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51930
	I0805 16:09:12.126593    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:09:12.126941    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:09:12.126959    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:09:12.127183    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:09:12.127284    4013 main.go:141] libmachine: (ha-968000) Calling .GetState
	I0805 16:09:12.127369    4013 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:09:12.127424    4013 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid from json: 4025
	I0805 16:09:12.128374    4013 host.go:66] Checking if "ha-968000" exists ...
	I0805 16:09:12.128663    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:09:12.128678    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:09:12.137093    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51932
	I0805 16:09:12.137400    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:09:12.137721    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:09:12.137731    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:09:12.137942    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:09:12.138052    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:09:12.138149    4013 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000 for IP: 192.169.0.6
	I0805 16:09:12.138156    4013 certs.go:194] generating shared ca certs ...
	I0805 16:09:12.138169    4013 certs.go:226] acquiring lock for ca certs: {Name:mkb83e058d89c7d4e66f4136f377a3c305b13735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:09:12.138309    4013 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key
	I0805 16:09:12.138365    4013 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key
	I0805 16:09:12.138373    4013 certs.go:256] generating profile certs ...
	I0805 16:09:12.138477    4013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.key
	I0805 16:09:12.138565    4013 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key.77dc068d
	I0805 16:09:12.138631    4013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key
	I0805 16:09:12.138639    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 16:09:12.138660    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 16:09:12.138681    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 16:09:12.138700    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 16:09:12.138717    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 16:09:12.138735    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 16:09:12.138754    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 16:09:12.138776    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 16:09:12.138855    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem (1338 bytes)
	W0805 16:09:12.138895    4013 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0805 16:09:12.138904    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:09:12.138940    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem (1082 bytes)
	I0805 16:09:12.138974    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:09:12.139009    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem (1675 bytes)
	I0805 16:09:12.139074    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:09:12.139106    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:09:12.139125    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem -> /usr/share/ca-certificates/1678.pem
	I0805 16:09:12.139142    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /usr/share/ca-certificates/16782.pem
	I0805 16:09:12.139167    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:09:12.139259    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:09:12.139346    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:09:12.139430    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:09:12.139498    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:09:12.171916    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0805 16:09:12.175290    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0805 16:09:12.184095    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0805 16:09:12.187128    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0805 16:09:12.195868    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0805 16:09:12.198915    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0805 16:09:12.208072    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0805 16:09:12.211239    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0805 16:09:12.220236    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0805 16:09:12.223357    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0805 16:09:12.231812    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0805 16:09:12.234916    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0805 16:09:12.243760    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:09:12.264594    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 16:09:12.284204    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:09:12.304172    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:09:12.324282    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0805 16:09:12.344243    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 16:09:12.363682    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:09:12.383391    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 16:09:12.403042    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:09:12.422963    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0805 16:09:12.442422    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0805 16:09:12.462071    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0805 16:09:12.476035    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0805 16:09:12.489609    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0805 16:09:12.502965    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0805 16:09:12.516617    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0805 16:09:12.530178    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0805 16:09:12.543803    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0805 16:09:12.557186    4013 ssh_runner.go:195] Run: openssl version
	I0805 16:09:12.561690    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0805 16:09:12.570469    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0805 16:09:12.573916    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:09:12.573968    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0805 16:09:12.578325    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0805 16:09:12.586655    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0805 16:09:12.595266    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0805 16:09:12.598773    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:09:12.598808    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0805 16:09:12.603106    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 16:09:12.611770    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:09:12.620276    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:09:12.623836    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:09:12.623874    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:09:12.628099    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 16:09:12.636558    4013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:09:12.640104    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 16:09:12.644367    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 16:09:12.648558    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 16:09:12.653002    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 16:09:12.657413    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 16:09:12.661571    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0805 16:09:12.665817    4013 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.30.3 docker true true} ...
	I0805 16:09:12.665880    4013 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-968000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 16:09:12.665898    4013 kube-vip.go:115] generating kube-vip config ...
	I0805 16:09:12.665932    4013 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 16:09:12.678633    4013 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 16:09:12.678672    4013 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0805 16:09:12.678725    4013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 16:09:12.686682    4013 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:09:12.686732    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0805 16:09:12.694235    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0805 16:09:12.708178    4013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:09:12.721592    4013 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0805 16:09:12.735241    4013 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0805 16:09:12.738251    4013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:09:12.747938    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:09:12.839333    4013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:09:12.855307    4013 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:09:12.855486    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:09:12.876653    4013 out.go:177] * Verifying Kubernetes components...
	I0805 16:09:12.918406    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:09:13.043139    4013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:09:13.061746    4013 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:09:13.061950    4013 kapi.go:59] client config for ha-968000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x85c5060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0805 16:09:13.061990    4013 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0805 16:09:13.062163    4013 node_ready.go:35] waiting up to 6m0s for node "ha-968000-m02" to be "Ready" ...
	I0805 16:09:13.062248    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:13.062253    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:13.062261    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:13.062265    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.259366    4013 round_trippers.go:574] Response Status: 200 OK in 8197 milliseconds
	I0805 16:09:21.260575    4013 node_ready.go:49] node "ha-968000-m02" has status "Ready":"True"
	I0805 16:09:21.260589    4013 node_ready.go:38] duration metric: took 8.198406493s for node "ha-968000-m02" to be "Ready" ...
	I0805 16:09:21.260596    4013 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:09:21.260646    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:09:21.260653    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.260660    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.260665    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.302891    4013 round_trippers.go:574] Response Status: 200 OK in 42 milliseconds
	I0805 16:09:21.310518    4013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hjp5z" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.310596    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hjp5z
	I0805 16:09:21.310619    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.310632    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.310639    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.313152    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:21.313881    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:21.313892    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.313899    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.313902    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.317700    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:21.318187    4013 pod_ready.go:92] pod "coredns-7db6d8ff4d-hjp5z" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:21.318198    4013 pod_ready.go:81] duration metric: took 7.662792ms for pod "coredns-7db6d8ff4d-hjp5z" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.318207    4013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.318250    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:09:21.318256    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.318263    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.318268    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.326180    4013 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0805 16:09:21.326741    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:21.326750    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.326758    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.326763    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.331849    4013 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 16:09:21.332344    4013 pod_ready.go:92] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:21.332356    4013 pod_ready.go:81] duration metric: took 14.143254ms for pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.332364    4013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.332409    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000
	I0805 16:09:21.332416    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.332423    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.332426    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.335622    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:21.335995    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:21.336004    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.336019    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.336025    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.339965    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:21.340276    4013 pod_ready.go:92] pod "etcd-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:21.340287    4013 pod_ready.go:81] duration metric: took 7.918315ms for pod "etcd-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.340295    4013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.340346    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m02
	I0805 16:09:21.340352    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.340359    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.340365    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.342503    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:21.343015    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:21.343024    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.343031    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.343036    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.346019    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:21.346517    4013 pod_ready.go:92] pod "etcd-ha-968000-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:21.346530    4013 pod_ready.go:81] duration metric: took 6.229187ms for pod "etcd-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.346558    4013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.346618    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m03
	I0805 16:09:21.346625    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.346633    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.346638    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.351435    4013 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 16:09:21.461654    4013 request.go:629] Waited for 109.640417ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:21.461696    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:21.461703    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.461709    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.461715    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.465496    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:21.465774    4013 pod_ready.go:92] pod "etcd-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:21.465784    4013 pod_ready.go:81] duration metric: took 119.216409ms for pod "etcd-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.465817    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.661090    4013 request.go:629] Waited for 195.188408ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000
	I0805 16:09:21.661122    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000
	I0805 16:09:21.661127    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.661133    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.661136    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.663700    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:21.860705    4013 request.go:629] Waited for 196.382714ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:21.860744    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:21.860750    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.860758    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.860764    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.864103    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:21.864428    4013 pod_ready.go:92] pod "kube-apiserver-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:21.864438    4013 pod_ready.go:81] duration metric: took 398.612841ms for pod "kube-apiserver-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.864448    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:22.062331    4013 request.go:629] Waited for 197.82051ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m02
	I0805 16:09:22.062511    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m02
	I0805 16:09:22.062523    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:22.062533    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:22.062539    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:22.065766    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:22.262057    4013 request.go:629] Waited for 195.681075ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:22.262125    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:22.262130    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:22.262137    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:22.262140    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:22.264946    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:22.265310    4013 pod_ready.go:92] pod "kube-apiserver-ha-968000-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:22.265318    4013 pod_ready.go:81] duration metric: took 400.862554ms for pod "kube-apiserver-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:22.265325    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:22.460707    4013 request.go:629] Waited for 195.347101ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m03
	I0805 16:09:22.460759    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m03
	I0805 16:09:22.460765    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:22.460781    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:22.460785    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:22.464130    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:22.660697    4013 request.go:629] Waited for 196.193657ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:22.660729    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:22.660736    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:22.660779    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:22.660812    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:22.662931    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:22.663458    4013 pod_ready.go:92] pod "kube-apiserver-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:22.663468    4013 pod_ready.go:81] duration metric: took 398.13793ms for pod "kube-apiserver-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:22.663475    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:22.861064    4013 request.go:629] Waited for 197.549417ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000
	I0805 16:09:22.861116    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000
	I0805 16:09:22.861124    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:22.861131    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:22.861137    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:22.863357    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:23.060775    4013 request.go:629] Waited for 196.997441ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:23.060838    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:23.060844    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:23.060850    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:23.060854    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:23.062638    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:09:23.062947    4013 pod_ready.go:92] pod "kube-controller-manager-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:23.062956    4013 pod_ready.go:81] duration metric: took 399.47493ms for pod "kube-controller-manager-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:23.062963    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:23.262182    4013 request.go:629] Waited for 199.175443ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m02
	I0805 16:09:23.262278    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m02
	I0805 16:09:23.262289    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:23.262301    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:23.262309    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:23.265274    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:23.460721    4013 request.go:629] Waited for 194.890215ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:23.460750    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:23.460755    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:23.460761    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:23.460766    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:23.462860    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:23.463267    4013 pod_ready.go:97] node "ha-968000-m02" hosting pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m02" has status "Ready":"False"
	I0805 16:09:23.463277    4013 pod_ready.go:81] duration metric: took 400.308105ms for pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	E0805 16:09:23.463284    4013 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-968000-m02" hosting pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m02" has status "Ready":"False"
	I0805 16:09:23.463290    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:23.662538    4013 request.go:629] Waited for 199.207212ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:09:23.662619    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:09:23.662625    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:23.662631    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:23.662635    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:23.664768    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:23.861796    4013 request.go:629] Waited for 196.439694ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:23.861935    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:23.861946    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:23.861956    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:23.861962    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:23.865458    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:23.865815    4013 pod_ready.go:92] pod "kube-controller-manager-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:23.865826    4013 pod_ready.go:81] duration metric: took 402.529289ms for pod "kube-controller-manager-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:23.865833    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fvd5q" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:24.061409    4013 request.go:629] Waited for 195.531329ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvd5q
	I0805 16:09:24.061446    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvd5q
	I0805 16:09:24.061452    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:24.061491    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:24.061496    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:24.063747    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:24.261469    4013 request.go:629] Waited for 197.298268ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:24.261565    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:24.261573    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:24.261581    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:24.261587    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:24.264861    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:24.265277    4013 pod_ready.go:97] node "ha-968000-m02" hosting pod "kube-proxy-fvd5q" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m02" has status "Ready":"False"
	I0805 16:09:24.265288    4013 pod_ready.go:81] duration metric: took 399.450273ms for pod "kube-proxy-fvd5q" in "kube-system" namespace to be "Ready" ...
	E0805 16:09:24.265296    4013 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-968000-m02" hosting pod "kube-proxy-fvd5q" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m02" has status "Ready":"False"
	I0805 16:09:24.265301    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p4xgk" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:24.461481    4013 request.go:629] Waited for 196.027245ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p4xgk
	I0805 16:09:24.461559    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p4xgk
	I0805 16:09:24.461578    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:24.461590    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:24.461596    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:24.464886    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:24.661858    4013 request.go:629] Waited for 196.151825ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:24.662024    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:24.662034    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:24.662044    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:24.662050    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:24.665229    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:24.665765    4013 pod_ready.go:92] pod "kube-proxy-p4xgk" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:24.665774    4013 pod_ready.go:81] duration metric: took 400.467773ms for pod "kube-proxy-p4xgk" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:24.665781    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qptt6" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:24.861504    4013 request.go:629] Waited for 195.677553ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qptt6
	I0805 16:09:24.861566    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qptt6
	I0805 16:09:24.861577    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:24.861588    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:24.861595    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:24.865839    4013 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 16:09:25.061918    4013 request.go:629] Waited for 195.700422ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m04
	I0805 16:09:25.061988    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m04
	I0805 16:09:25.061994    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:25.062000    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:25.062004    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:25.063765    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:09:25.064046    4013 pod_ready.go:92] pod "kube-proxy-qptt6" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:25.064056    4013 pod_ready.go:81] duration metric: took 398.270559ms for pod "kube-proxy-qptt6" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:25.064065    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v87jb" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:25.261506    4013 request.go:629] Waited for 197.352793ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v87jb
	I0805 16:09:25.261554    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v87jb
	I0805 16:09:25.261563    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:25.261573    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:25.261582    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:25.264807    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:25.461565    4013 request.go:629] Waited for 196.17837ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:25.461605    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:25.461613    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:25.461621    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:25.461625    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:25.464575    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:25.464951    4013 pod_ready.go:92] pod "kube-proxy-v87jb" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:25.464960    4013 pod_ready.go:81] duration metric: took 400.887094ms for pod "kube-proxy-v87jb" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:25.464982    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:25.662277    4013 request.go:629] Waited for 197.19961ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000
	I0805 16:09:25.662316    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000
	I0805 16:09:25.662325    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:25.662333    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:25.662339    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:25.664596    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:25.861101    4013 request.go:629] Waited for 196.140125ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:25.861136    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:25.861142    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:25.861149    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:25.861155    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:25.863555    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:25.863937    4013 pod_ready.go:92] pod "kube-scheduler-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:25.863947    4013 pod_ready.go:81] duration metric: took 398.956028ms for pod "kube-scheduler-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:25.863960    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:26.061952    4013 request.go:629] Waited for 197.955177ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m02
	I0805 16:09:26.062048    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m02
	I0805 16:09:26.062057    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:26.062065    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:26.062070    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:26.064556    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:26.262140    4013 request.go:629] Waited for 197.126449ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:26.262175    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:26.262180    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:26.262186    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:26.262190    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:26.264203    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:26.264592    4013 pod_ready.go:97] node "ha-968000-m02" hosting pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m02" has status "Ready":"False"
	I0805 16:09:26.264603    4013 pod_ready.go:81] duration metric: took 400.638133ms for pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	E0805 16:09:26.264611    4013 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-968000-m02" hosting pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m02" has status "Ready":"False"
	I0805 16:09:26.264615    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:26.461402    4013 request.go:629] Waited for 196.72911ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m03
	I0805 16:09:26.461551    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m03
	I0805 16:09:26.461563    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:26.461573    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:26.461580    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:26.465124    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:26.661745    4013 request.go:629] Waited for 196.148221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:26.661836    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:26.661842    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:26.661848    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:26.661852    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:26.663931    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:26.664273    4013 pod_ready.go:92] pod "kube-scheduler-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:26.664282    4013 pod_ready.go:81] duration metric: took 399.661598ms for pod "kube-scheduler-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:26.664289    4013 pod_ready.go:38] duration metric: took 5.403682263s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:09:26.664305    4013 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:09:26.664365    4013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:09:26.676043    4013 api_server.go:72] duration metric: took 13.820707254s to wait for apiserver process to appear ...
	I0805 16:09:26.676055    4013 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:09:26.676075    4013 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0805 16:09:26.679244    4013 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0805 16:09:26.679280    4013 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0805 16:09:26.679287    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:26.679294    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:26.679298    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:26.679920    4013 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:09:26.680031    4013 api_server.go:141] control plane version: v1.30.3
	I0805 16:09:26.680044    4013 api_server.go:131] duration metric: took 3.983266ms to wait for apiserver health ...
	I0805 16:09:26.680049    4013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 16:09:26.861214    4013 request.go:629] Waited for 181.081617ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:09:26.861259    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:09:26.861267    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:26.861278    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:26.861307    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:26.876137    4013 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0805 16:09:26.882111    4013 system_pods.go:59] 26 kube-system pods found
	I0805 16:09:26.882136    4013 system_pods.go:61] "coredns-7db6d8ff4d-hjp5z" [e31fd97b-2727-4db3-a17c-3302c320832b] Running
	I0805 16:09:26.882140    4013 system_pods.go:61] "coredns-7db6d8ff4d-mfzln" [ea5c136e-84a6-4253-8f61-85c427b83840] Running
	I0805 16:09:26.882143    4013 system_pods.go:61] "etcd-ha-968000" [24590478-199e-4d78-8312-3d5924d6e915] Running
	I0805 16:09:26.882146    4013 system_pods.go:61] "etcd-ha-968000-m02" [cefe6f5a-3a87-4ccf-9419-0b864275c9c9] Running
	I0805 16:09:26.882149    4013 system_pods.go:61] "etcd-ha-968000-m03" [ec752887-5a12-4888-ba88-3fb5d54c6ce7] Running
	I0805 16:09:26.882151    4013 system_pods.go:61] "kindnet-5dshm" [2641d2a9-a26a-4cbe-b8ea-99ed7c7af43c] Running
	I0805 16:09:26.882153    4013 system_pods.go:61] "kindnet-cglm9" [80a5d2ca-3d9f-4347-bb68-cd6eac4e4aa8] Running
	I0805 16:09:26.882156    4013 system_pods.go:61] "kindnet-fp5ns" [bf9c4454-9491-4a21-8f0a-6c6f21919551] Running
	I0805 16:09:26.882158    4013 system_pods.go:61] "kindnet-qh6l6" [382ac149-5a4e-4fe4-aaaa-9c929c93b101] Running
	I0805 16:09:26.882161    4013 system_pods.go:61] "kube-apiserver-ha-968000" [04e9a721-eb6e-47b4-a7f0-2cad1ee201f7] Running
	I0805 16:09:26.882164    4013 system_pods.go:61] "kube-apiserver-ha-968000-m02" [0465a825-6697-4a98-bb88-18df7929a5dd] Running
	I0805 16:09:26.882166    4013 system_pods.go:61] "kube-apiserver-ha-968000-m03" [a0d3fc83-9820-463e-81bb-2abcb1b4c868] Running
	I0805 16:09:26.882169    4013 system_pods.go:61] "kube-controller-manager-ha-968000" [2078d070-21b4-4d47-a4d3-b130fa8b3aaf] Running
	I0805 16:09:26.882171    4013 system_pods.go:61] "kube-controller-manager-ha-968000-m02" [f0a1cc06-05bb-4efa-9a53-ebccba2b5f9e] Running
	I0805 16:09:26.882174    4013 system_pods.go:61] "kube-controller-manager-ha-968000-m03" [d140abba-93f2-4062-8ee8-3918ff5ae882] Running
	I0805 16:09:26.882176    4013 system_pods.go:61] "kube-proxy-fvd5q" [f2f13535-5802-4a1c-8243-48de42b79e74] Running
	I0805 16:09:26.882179    4013 system_pods.go:61] "kube-proxy-p4xgk" [aaca6036-f95c-44fb-a358-5ac881148fa4] Running
	I0805 16:09:26.882182    4013 system_pods.go:61] "kube-proxy-qptt6" [a826a636-1d05-4cca-a56d-d25a9cf41506] Running
	I0805 16:09:26.882184    4013 system_pods.go:61] "kube-proxy-v87jb" [d98f61ac-3a61-452c-8507-7258a9703c15] Running
	I0805 16:09:26.882188    4013 system_pods.go:61] "kube-scheduler-ha-968000" [20bf4b5e-71a1-4708-bb6a-34b0e44f196d] Running
	I0805 16:09:26.882190    4013 system_pods.go:61] "kube-scheduler-ha-968000-m02" [e590d5bf-9517-433b-9759-5b0f16cfe9a9] Running
	I0805 16:09:26.882193    4013 system_pods.go:61] "kube-scheduler-ha-968000-m03" [91120005-f0b0-47d5-a91c-c06b12e6da3e] Running
	I0805 16:09:26.882197    4013 system_pods.go:61] "kube-vip-ha-968000" [373808d0-e9f2-4cea-a7b6-98b309fac6e7] Running
	I0805 16:09:26.882201    4013 system_pods.go:61] "kube-vip-ha-968000-m02" [713fc36a-5582-464c-82d3-02905c81b753] Running
	I0805 16:09:26.882204    4013 system_pods.go:61] "kube-vip-ha-968000-m03" [d94a7e1c-9ddd-4229-b4cd-ac05384dd20a] Running
	I0805 16:09:26.882207    4013 system_pods.go:61] "storage-provisioner" [52e2952a-756d-4f65-84f5-588cb6563297] Running
	I0805 16:09:26.882211    4013 system_pods.go:74] duration metric: took 202.157859ms to wait for pod list to return data ...
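
The repeated "Waited ... due to client-side throttling, not priority and fairness" lines above come from client-go's default token-bucket rate limiter (QPS 5, burst 10) pacing the test's bursts of GETs; they indicate client-side pacing, not an apiserver problem. A minimal sketch, assuming the kubeconfig path shown in the log and stock client-go, of raising those limits for a bursty checker (QPS/Burst are real rest.Config fields; the chosen values are illustrative):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Kubeconfig path taken from the log; adjust for your environment.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19373-1122/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	// client-go defaults to QPS=5, Burst=10; raising them removes the
    	// ~180ms client-side throttling waits seen above.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    }
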
	I0805 16:09:26.882216    4013 default_sa.go:34] waiting for default service account to be created ...
	I0805 16:09:27.061417    4013 request.go:629] Waited for 179.110016ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:09:27.061534    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:09:27.061546    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:27.061557    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:27.061563    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:27.065177    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:27.065383    4013 default_sa.go:45] found service account: "default"
	I0805 16:09:27.065396    4013 default_sa.go:55] duration metric: took 183.174105ms for default service account to be created ...
	I0805 16:09:27.065406    4013 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 16:09:27.262565    4013 request.go:629] Waited for 197.034728ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:09:27.262625    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:09:27.262635    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:27.262646    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:27.262654    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:27.268433    4013 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 16:09:27.273328    4013 system_pods.go:86] 26 kube-system pods found
	I0805 16:09:27.273339    4013 system_pods.go:89] "coredns-7db6d8ff4d-hjp5z" [e31fd97b-2727-4db3-a17c-3302c320832b] Running
	I0805 16:09:27.273344    4013 system_pods.go:89] "coredns-7db6d8ff4d-mfzln" [ea5c136e-84a6-4253-8f61-85c427b83840] Running
	I0805 16:09:27.273348    4013 system_pods.go:89] "etcd-ha-968000" [24590478-199e-4d78-8312-3d5924d6e915] Running
	I0805 16:09:27.273351    4013 system_pods.go:89] "etcd-ha-968000-m02" [cefe6f5a-3a87-4ccf-9419-0b864275c9c9] Running
	I0805 16:09:27.273354    4013 system_pods.go:89] "etcd-ha-968000-m03" [ec752887-5a12-4888-ba88-3fb5d54c6ce7] Running
	I0805 16:09:27.273358    4013 system_pods.go:89] "kindnet-5dshm" [2641d2a9-a26a-4cbe-b8ea-99ed7c7af43c] Running
	I0805 16:09:27.273361    4013 system_pods.go:89] "kindnet-cglm9" [80a5d2ca-3d9f-4347-bb68-cd6eac4e4aa8] Running
	I0805 16:09:27.273365    4013 system_pods.go:89] "kindnet-fp5ns" [bf9c4454-9491-4a21-8f0a-6c6f21919551] Running
	I0805 16:09:27.273369    4013 system_pods.go:89] "kindnet-qh6l6" [382ac149-5a4e-4fe4-aaaa-9c929c93b101] Running
	I0805 16:09:27.273372    4013 system_pods.go:89] "kube-apiserver-ha-968000" [04e9a721-eb6e-47b4-a7f0-2cad1ee201f7] Running
	I0805 16:09:27.273376    4013 system_pods.go:89] "kube-apiserver-ha-968000-m02" [0465a825-6697-4a98-bb88-18df7929a5dd] Running
	I0805 16:09:27.273380    4013 system_pods.go:89] "kube-apiserver-ha-968000-m03" [a0d3fc83-9820-463e-81bb-2abcb1b4c868] Running
	I0805 16:09:27.273383    4013 system_pods.go:89] "kube-controller-manager-ha-968000" [2078d070-21b4-4d47-a4d3-b130fa8b3aaf] Running
	I0805 16:09:27.273387    4013 system_pods.go:89] "kube-controller-manager-ha-968000-m02" [f0a1cc06-05bb-4efa-9a53-ebccba2b5f9e] Running
	I0805 16:09:27.273393    4013 system_pods.go:89] "kube-controller-manager-ha-968000-m03" [d140abba-93f2-4062-8ee8-3918ff5ae882] Running
	I0805 16:09:27.273398    4013 system_pods.go:89] "kube-proxy-fvd5q" [f2f13535-5802-4a1c-8243-48de42b79e74] Running
	I0805 16:09:27.273401    4013 system_pods.go:89] "kube-proxy-p4xgk" [aaca6036-f95c-44fb-a358-5ac881148fa4] Running
	I0805 16:09:27.273408    4013 system_pods.go:89] "kube-proxy-qptt6" [a826a636-1d05-4cca-a56d-d25a9cf41506] Running
	I0805 16:09:27.273412    4013 system_pods.go:89] "kube-proxy-v87jb" [d98f61ac-3a61-452c-8507-7258a9703c15] Running
	I0805 16:09:27.273415    4013 system_pods.go:89] "kube-scheduler-ha-968000" [20bf4b5e-71a1-4708-bb6a-34b0e44f196d] Running
	I0805 16:09:27.273419    4013 system_pods.go:89] "kube-scheduler-ha-968000-m02" [e590d5bf-9517-433b-9759-5b0f16cfe9a9] Running
	I0805 16:09:27.273422    4013 system_pods.go:89] "kube-scheduler-ha-968000-m03" [91120005-f0b0-47d5-a91c-c06b12e6da3e] Running
	I0805 16:09:27.273426    4013 system_pods.go:89] "kube-vip-ha-968000" [373808d0-e9f2-4cea-a7b6-98b309fac6e7] Running
	I0805 16:09:27.273429    4013 system_pods.go:89] "kube-vip-ha-968000-m02" [713fc36a-5582-464c-82d3-02905c81b753] Running
	I0805 16:09:27.273433    4013 system_pods.go:89] "kube-vip-ha-968000-m03" [d94a7e1c-9ddd-4229-b4cd-ac05384dd20a] Running
	I0805 16:09:27.273450    4013 system_pods.go:89] "storage-provisioner" [52e2952a-756d-4f65-84f5-588cb6563297] Running
	I0805 16:09:27.273458    4013 system_pods.go:126] duration metric: took 208.046004ms to wait for k8s-apps to be running ...
	I0805 16:09:27.273468    4013 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 16:09:27.273520    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:09:27.285035    4013 system_svc.go:56] duration metric: took 11.567511ms WaitForService to wait for kubelet
	I0805 16:09:27.285048    4013 kubeadm.go:582] duration metric: took 14.42971445s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:09:27.285060    4013 node_conditions.go:102] verifying NodePressure condition ...
	I0805 16:09:27.461886    4013 request.go:629] Waited for 176.780844ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0805 16:09:27.461995    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0805 16:09:27.462013    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:27.462026    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:27.462035    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:27.465297    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:27.466219    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:09:27.466232    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:09:27.466242    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:09:27.466246    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:09:27.466249    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:09:27.466253    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:09:27.466256    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:09:27.466259    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:09:27.466262    4013 node_conditions.go:105] duration metric: took 181.199284ms to run NodePressure ...
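
The NodePressure lines above are read from each node's status.capacity. A sketch of the same read with client-go (imports as in the previous sketch, plus corev1 "k8s.io/api/core/v1"; the helper name is illustrative):

    // nodeCapacities mirrors the node_conditions lines above: it lists the
    // cluster's nodes and prints ephemeral-storage and CPU capacity.
    func nodeCapacities(ctx context.Context, clientset *kubernetes.Clientset) error {
    	nodes, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		// Map values are copied to locals so the pointer-receiver
    		// String() methods on resource.Quantity can be called.
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
    	}
    	return nil
    }
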
	I0805 16:09:27.466271    4013 start.go:241] waiting for startup goroutines ...
	I0805 16:09:27.466288    4013 start.go:255] writing updated cluster config ...
	I0805 16:09:27.488716    4013 out.go:177] 
	I0805 16:09:27.508938    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:09:27.509085    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:09:27.531540    4013 out.go:177] * Starting "ha-968000-m03" control-plane node in "ha-968000" cluster
	I0805 16:09:27.573486    4013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:09:27.573507    4013 cache.go:56] Caching tarball of preloaded images
	I0805 16:09:27.573613    4013 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:09:27.573623    4013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:09:27.573701    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:09:27.574588    4013 start.go:360] acquireMachinesLock for ha-968000-m03: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:09:27.574644    4013 start.go:364] duration metric: took 42.919µs to acquireMachinesLock for "ha-968000-m03"
	I0805 16:09:27.574659    4013 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:09:27.574662    4013 fix.go:54] fixHost starting: m03
	I0805 16:09:27.574910    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:09:27.574930    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:09:27.583789    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51937
	I0805 16:09:27.584141    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:09:27.584476    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:09:27.584490    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:09:27.584707    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:09:27.584816    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:09:27.584907    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetState
	I0805 16:09:27.584990    4013 main.go:141] libmachine: (ha-968000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:09:27.585071    4013 main.go:141] libmachine: (ha-968000-m03) DBG | hyperkit pid from json: 3471
	I0805 16:09:27.585977    4013 main.go:141] libmachine: (ha-968000-m03) DBG | hyperkit pid 3471 missing from process table
	I0805 16:09:27.585998    4013 fix.go:112] recreateIfNeeded on ha-968000-m03: state=Stopped err=<nil>
	I0805 16:09:27.586006    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	W0805 16:09:27.586083    4013 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:09:27.606653    4013 out.go:177] * Restarting existing hyperkit VM for "ha-968000-m03" ...
	I0805 16:09:27.648666    4013 main.go:141] libmachine: (ha-968000-m03) Calling .Start
	I0805 16:09:27.648869    4013 main.go:141] libmachine: (ha-968000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:09:27.648916    4013 main.go:141] libmachine: (ha-968000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/hyperkit.pid
	I0805 16:09:27.650524    4013 main.go:141] libmachine: (ha-968000-m03) DBG | hyperkit pid 3471 missing from process table
	I0805 16:09:27.650545    4013 main.go:141] libmachine: (ha-968000-m03) DBG | pid 3471 is in state "Stopped"
	I0805 16:09:27.650562    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/hyperkit.pid...
	I0805 16:09:27.650769    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Using UUID 2e5bd4cb-7666-4039-8bdc-5eded2ad114e
	I0805 16:09:27.679630    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Generated MAC 5e:e5:6c:f1:60:ca
	I0805 16:09:27.679657    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000
	I0805 16:09:27.679792    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2e5bd4cb-7666-4039-8bdc-5eded2ad114e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:09:27.679833    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2e5bd4cb-7666-4039-8bdc-5eded2ad114e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:09:27.679876    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2e5bd4cb-7666-4039-8bdc-5eded2ad114e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/ha-968000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"}
	I0805 16:09:27.679918    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2e5bd4cb-7666-4039-8bdc-5eded2ad114e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/ha-968000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"
	I0805 16:09:27.679930    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:09:27.681441    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 DEBUG: hyperkit: Pid is 4050
	I0805 16:09:27.681855    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Attempt 0
	I0805 16:09:27.681870    4013 main.go:141] libmachine: (ha-968000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:09:27.681942    4013 main.go:141] libmachine: (ha-968000-m03) DBG | hyperkit pid from json: 4050
	I0805 16:09:27.684086    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Searching for 5e:e5:6c:f1:60:ca in /var/db/dhcpd_leases ...
	I0805 16:09:27.684171    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0805 16:09:27.684192    4013 main.go:141] libmachine: (ha-968000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:09:27.684213    4013 main.go:141] libmachine: (ha-968000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2acfd}
	I0805 16:09:27.684223    4013 main.go:141] libmachine: (ha-968000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b15b5a}
	I0805 16:09:27.684257    4013 main.go:141] libmachine: (ha-968000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b2ac1c}
	I0805 16:09:27.684275    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Found match: 5e:e5:6c:f1:60:ca
	I0805 16:09:27.684281    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetConfigRaw
	I0805 16:09:27.684302    4013 main.go:141] libmachine: (ha-968000-m03) DBG | IP: 192.169.0.7
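
The Searching/Found lines above show how the driver resolves the VM's IP: it matches the generated MAC against macOS's DHCP lease database. A sketch of that lookup, assuming /var/db/dhcpd_leases entries carry ip_address= and hw_address= lines with the IP listed first (the helper name is illustrative; imports: bufio, fmt, os, strings):

    // ipForMAC scans a dhcpd_leases-style file. Each entry lists
    // ip_address= before hw_address=, so we remember the last IP seen
    // and return it when the MAC matches.
    func ipForMAC(leasePath, mac string) (string, error) {
    	f, err := os.Open(leasePath)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()
    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
    			return ip, nil
    		}
    	}
    	return "", fmt.Errorf("no lease found for MAC %s", mac)
    }
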
	I0805 16:09:27.684999    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetIP
	I0805 16:09:27.685240    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:09:27.685658    4013 machine.go:94] provisionDockerMachine start ...
	I0805 16:09:27.685674    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:09:27.685796    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:09:27.685888    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:09:27.685972    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:09:27.686054    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:09:27.686136    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:09:27.686243    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:27.686399    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:09:27.686406    4013 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:09:27.689026    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:09:27.697927    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:09:27.698811    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:09:27.698833    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:09:27.698857    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:09:27.698876    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:09:28.083003    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:09:28.083019    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:09:28.198118    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:09:28.198136    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:09:28.198156    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:09:28.198170    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:09:28.198987    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:09:28.198999    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:09:33.906297    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:33 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:09:33.906335    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:33 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:09:33.906345    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:33 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:09:33.929592    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:33 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:10:02.753110    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
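
The "Using SSH client type: native" struct dump above is minikube's wrapper over golang.org/x/crypto/ssh. A minimal sketch of the same operation, dialing 192.169.0.7:22 with the machine's id_rsa key (both taken from the log) and running `hostname`; skipping host-key verification is an assumption that only fits a throwaway test VM:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a disposable test VM
    	}
    	client, err := ssh.Dial("tcp", "192.169.0.7:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()
    	out, err := session.Output("hostname")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s", out) // prints "minikube" before the hostname is reprovisioned
    }
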
	I0805 16:10:02.753128    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetMachineName
	I0805 16:10:02.753270    4013 buildroot.go:166] provisioning hostname "ha-968000-m03"
	I0805 16:10:02.753282    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetMachineName
	I0805 16:10:02.753381    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:02.753472    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:02.753543    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:02.753631    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:02.753716    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:02.753836    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:02.753997    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:10:02.754006    4013 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-968000-m03 && echo "ha-968000-m03" | sudo tee /etc/hostname
	I0805 16:10:02.815926    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-968000-m03
	
	I0805 16:10:02.815941    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:02.816075    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:02.816178    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:02.816265    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:02.816353    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:02.816497    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:02.816655    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:10:02.816667    4013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-968000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-968000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-968000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:10:02.874015    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:10:02.874031    4013 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:10:02.874040    4013 buildroot.go:174] setting up certificates
	I0805 16:10:02.874046    4013 provision.go:84] configureAuth start
	I0805 16:10:02.874053    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetMachineName
	I0805 16:10:02.874189    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetIP
	I0805 16:10:02.874289    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:02.874374    4013 provision.go:143] copyHostCerts
	I0805 16:10:02.874402    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:10:02.874450    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:10:02.874455    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:10:02.874582    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:10:02.874781    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:10:02.874825    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:10:02.874830    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:10:02.874901    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:10:02.875047    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:10:02.875075    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:10:02.875079    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:10:02.875146    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:10:02.875295    4013 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.ha-968000-m03 san=[127.0.0.1 192.169.0.7 ha-968000-m03 localhost minikube]
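
The server cert above is signed by the minikube CA with the SAN list shown. A self-signed stand-in sketch with the same SANs, using only the Go standard library (the real flow signs with ca.pem/ca-key.pem; the org name and validity here are illustrative):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-968000-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs copied from the provision.go line above.
    		DNSNames:    []string{"ha-968000-m03", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
    	}
    	// Self-signed: template doubles as parent; minikube instead signs
    	// with its CA cert and key.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
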
	I0805 16:10:03.100424    4013 provision.go:177] copyRemoteCerts
	I0805 16:10:03.100475    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:10:03.100489    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:03.100628    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:03.100734    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.100820    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:03.100908    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/id_rsa Username:docker}
	I0805 16:10:03.133644    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:10:03.133711    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:10:03.152881    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:10:03.152956    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 16:10:03.172153    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:10:03.172226    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 16:10:03.192347    4013 provision.go:87] duration metric: took 318.292468ms to configureAuth
	I0805 16:10:03.192362    4013 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:10:03.192542    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:10:03.192555    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:03.192694    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:03.192785    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:03.192880    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.192966    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.193041    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:03.193164    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:03.193316    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:10:03.193325    4013 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:10:03.244032    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:10:03.244045    4013 buildroot.go:70] root file system type: tmpfs
	I0805 16:10:03.244123    4013 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:10:03.244135    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:03.244259    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:03.244342    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.244429    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.244514    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:03.244643    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:03.244779    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:10:03.244826    4013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:10:03.306704    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:10:03.306723    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:03.306859    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:03.306950    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.307037    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.307124    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:03.307256    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:03.307400    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:10:03.307414    4013 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:10:04.932560    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:10:04.932575    4013 machine.go:97] duration metric: took 37.246896971s to provisionDockerMachine
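
The `sudo diff -u ... || { mv; daemon-reload; enable; restart; }` command above is an idempotent update: only when docker.service.new differs from the installed unit is it moved into place and the daemon reloaded and restarted. A sketch of the same guard in Go (function name and file mode are illustrative; imports: bytes, os):

    // updateIfChanged writes want to path only when the current contents
    // differ, so callers can skip daemon-reload/restart on a no-op.
    func updateIfChanged(path string, want []byte) (changed bool, err error) {
    	cur, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(cur, want) {
    		return false, nil // already up to date
    	}
    	if err := os.WriteFile(path, want, 0o644); err != nil {
    		return false, err
    	}
    	return true, nil
    }
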
	I0805 16:10:04.932584    4013 start.go:293] postStartSetup for "ha-968000-m03" (driver="hyperkit")
	I0805 16:10:04.932592    4013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:10:04.932606    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:04.932806    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:10:04.932820    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:04.932921    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:04.933017    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:04.933114    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:04.933199    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/id_rsa Username:docker}
	I0805 16:10:04.965742    4013 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:10:04.968779    4013 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:10:04.968789    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:10:04.968872    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:10:04.969009    4013 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:10:04.969015    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:10:04.969171    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:10:04.977326    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:10:04.996442    4013 start.go:296] duration metric: took 63.849242ms for postStartSetup
	I0805 16:10:04.996464    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:04.996645    4013 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0805 16:10:04.996658    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:04.996749    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:04.996835    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:04.996919    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:04.996988    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/id_rsa Username:docker}
	I0805 16:10:05.029923    4013 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0805 16:10:05.029990    4013 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0805 16:10:05.062439    4013 fix.go:56] duration metric: took 37.48776057s for fixHost
	I0805 16:10:05.062463    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:05.062605    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:05.062687    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:05.062782    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:05.062875    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:05.062995    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:05.063135    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:10:05.063142    4013 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 16:10:05.114144    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722899405.020487015
	
	I0805 16:10:05.114159    4013 fix.go:216] guest clock: 1722899405.020487015
	I0805 16:10:05.114164    4013 fix.go:229] Guest: 2024-08-05 16:10:05.020487015 -0700 PDT Remote: 2024-08-05 16:10:05.062453 -0700 PDT m=+89.419854401 (delta=-41.965985ms)
	I0805 16:10:05.114175    4013 fix.go:200] guest clock delta is within tolerance: -41.965985ms
	I0805 16:10:05.114179    4013 start.go:83] releasing machines lock for "ha-968000-m03", held for 37.53951612s
	I0805 16:10:05.114196    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:05.114320    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetIP
	I0805 16:10:05.154856    4013 out.go:177] * Found network options:
	I0805 16:10:05.196438    4013 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0805 16:10:05.217521    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 16:10:05.217542    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:10:05.217557    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:05.218022    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:05.218155    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:05.218244    4013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:10:05.218267    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	W0805 16:10:05.218289    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 16:10:05.218305    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:10:05.218380    4013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:10:05.218396    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:05.218397    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:05.218547    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:05.218562    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:05.218682    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:05.218701    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:05.218796    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/id_rsa Username:docker}
	I0805 16:10:05.218817    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:05.218922    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/id_rsa Username:docker}
	W0805 16:10:05.247739    4013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:10:05.247807    4013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:10:05.295633    4013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:10:05.295651    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:10:05.295736    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:10:05.311187    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:10:05.320167    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:10:05.328956    4013 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:10:05.329006    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:10:05.337987    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:10:05.346989    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:10:05.356292    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:10:05.365468    4013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:10:05.374794    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:10:05.383659    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:10:05.392613    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:10:05.401497    4013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:10:05.409761    4013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:10:05.417735    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:10:05.522068    4013 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:10:05.541086    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:10:05.541154    4013 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:10:05.560931    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:10:05.572370    4013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:10:05.590083    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:10:05.601381    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:10:05.612999    4013 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:10:05.640303    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:10:05.651924    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:10:05.666834    4013 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:10:05.669785    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:10:05.677888    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:10:05.691535    4013 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:10:05.794601    4013 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:10:05.896489    4013 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:10:05.896516    4013 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:10:05.916844    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:10:06.013180    4013 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:10:08.281931    4013 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.2687312s)
	I0805 16:10:08.281998    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:10:08.292879    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:10:08.303134    4013 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:10:08.403828    4013 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:10:08.520343    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:10:08.633419    4013 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:10:08.648137    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:10:08.659447    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:10:08.754463    4013 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:10:08.821178    4013 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:10:08.821256    4013 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:10:08.825268    4013 start.go:563] Will wait 60s for crictl version
	I0805 16:10:08.825311    4013 ssh_runner.go:195] Run: which crictl
	I0805 16:10:08.828380    4013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:10:08.856405    4013 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0805 16:10:08.856477    4013 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:10:08.873070    4013 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
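
The version probe above shells out to `docker version --format {{.Server.Version}}` inside the VM. Run locally rather than over SSH, the equivalent is a plain os/exec call:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same probe the log runs over SSH, executed locally for illustration.
    	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("docker server version:", strings.TrimSpace(string(out)))
    }
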
	I0805 16:10:08.917245    4013 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 16:10:08.958050    4013 out.go:177]   - env NO_PROXY=192.169.0.5
	I0805 16:10:08.978959    4013 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0805 16:10:08.999958    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetIP
	I0805 16:10:09.000163    4013 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0805 16:10:09.003143    4013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:10:09.012521    4013 mustload.go:65] Loading cluster: ha-968000
	I0805 16:10:09.012700    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:10:09.012919    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:10:09.012941    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:10:09.021950    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51959
	I0805 16:10:09.022290    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:10:09.022650    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:10:09.022672    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:10:09.022912    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:10:09.023042    4013 main.go:141] libmachine: (ha-968000) Calling .GetState
	I0805 16:10:09.023120    4013 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:10:09.023210    4013 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid from json: 4025
	I0805 16:10:09.024146    4013 host.go:66] Checking if "ha-968000" exists ...
	I0805 16:10:09.024412    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:10:09.024436    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:10:09.033094    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51961
	I0805 16:10:09.033420    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:10:09.033772    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:10:09.033792    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:10:09.034017    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:10:09.034135    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:10:09.034227    4013 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000 for IP: 192.169.0.7
	I0805 16:10:09.034233    4013 certs.go:194] generating shared ca certs ...
	I0805 16:10:09.034246    4013 certs.go:226] acquiring lock for ca certs: {Name:mkb83e058d89c7d4e66f4136f377a3c305b13735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:10:09.034388    4013 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key
	I0805 16:10:09.034442    4013 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key
	I0805 16:10:09.034452    4013 certs.go:256] generating profile certs ...
	I0805 16:10:09.034546    4013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.key
	I0805 16:10:09.034648    4013 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key.526236ea
	I0805 16:10:09.034697    4013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key
	I0805 16:10:09.034704    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 16:10:09.034725    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 16:10:09.034745    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 16:10:09.034764    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 16:10:09.034786    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 16:10:09.034809    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 16:10:09.034828    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 16:10:09.034845    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 16:10:09.034929    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem (1338 bytes)
	W0805 16:10:09.034968    4013 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0805 16:10:09.034982    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:10:09.035017    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem (1082 bytes)
	I0805 16:10:09.035050    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:10:09.035079    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem (1675 bytes)
	I0805 16:10:09.035147    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:10:09.035187    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:10:09.035213    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem -> /usr/share/ca-certificates/1678.pem
	I0805 16:10:09.035232    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /usr/share/ca-certificates/16782.pem
	I0805 16:10:09.035261    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:10:09.035348    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:10:09.035432    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:10:09.035523    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:10:09.035597    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
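sshutil assembles its client from the pieces gathered just above: host IP, port 22, the per-machine id_rsa, and user docker. A minimal sketch of an equivalent dial with golang.org/x/crypto/ssh, assuming the same key path and address from this run:

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// The real code knows the host key it provisioned; skipping
		// verification here just keeps the sketch short.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "192.169.0.5:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("uname -a")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%s", out)
}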
	I0805 16:10:09.068818    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0805 16:10:09.072729    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0805 16:10:09.083911    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0805 16:10:09.087068    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0805 16:10:09.096135    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0805 16:10:09.099562    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0805 16:10:09.109334    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0805 16:10:09.112743    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0805 16:10:09.122244    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0805 16:10:09.125580    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0805 16:10:09.134471    4013 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0805 16:10:09.137936    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0805 16:10:09.147798    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:10:09.168268    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 16:10:09.188512    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:10:09.208613    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:10:09.229102    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0805 16:10:09.248927    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 16:10:09.269438    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:10:09.289326    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 16:10:09.309414    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:10:09.329327    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0805 16:10:09.349275    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0805 16:10:09.369465    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0805 16:10:09.383270    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0805 16:10:09.397217    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0805 16:10:09.410973    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0805 16:10:09.424636    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0805 16:10:09.438657    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0805 16:10:09.453241    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0805 16:10:09.467220    4013 ssh_runner.go:195] Run: openssl version
	I0805 16:10:09.471496    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:10:09.479975    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:10:09.483494    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:10:09.483535    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:10:09.487639    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 16:10:09.496028    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0805 16:10:09.504248    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0805 16:10:09.507546    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:10:09.507582    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0805 16:10:09.511833    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0805 16:10:09.520110    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0805 16:10:09.528467    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0805 16:10:09.531788    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:10:09.531831    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0805 16:10:09.536023    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
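Each ln -fs above creates OpenSSL's hash-based lookup link: openssl x509 -hash -noout prints the subject hash (b5213941 for minikubeCA.pem in this run), and TLS clients resolve a CA by opening /etc/ssl/certs/<hash>.0. A tiny sketch of the final link, taking the hash from the log output above:

package main

import (
	"log"
	"os"
)

func main() {
	// b5213941 is the subject hash openssl printed for minikubeCA.pem above;
	// the ".0" suffix is OpenSSL's counter for certs sharing a subject hash.
	if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", "/etc/ssl/certs/b5213941.0"); err != nil && !os.IsExist(err) {
		log.Fatal(err)
	}
}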
	I0805 16:10:09.544245    4013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:10:09.547794    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 16:10:09.552109    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 16:10:09.556303    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 16:10:09.560442    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 16:10:09.564725    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 16:10:09.569207    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
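The run of openssl x509 -noout -checkend 86400 calls asks, for each certificate, whether it will still be valid 86400 seconds (24 hours) from now: exit 0 means yes, exit 1 triggers regeneration. The equivalent check in Go, as a minimal sketch against one of the paths above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// -checkend 86400 asks: is the cert still valid 24h from now?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}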
	I0805 16:10:09.573628    4013 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.30.3 docker true true} ...
	I0805 16:10:09.573688    4013 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-968000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 16:10:09.573706    4013 kube-vip.go:115] generating kube-vip config ...
	I0805 16:10:09.573746    4013 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 16:10:09.586333    4013 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 16:10:09.586392    4013 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
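In this static-pod manifest, vip_leaderelection and vip_leasename tell kube-vip to elect a single control-plane holder for the 192.169.0.254 VIP through a coordination Lease named plndr-cp-lock in kube-system, with a 5s lease duration, 3s renew deadline, and 1s retry period. A hedged sketch of inspecting that election with client-go (the kubeconfig path is this run's; the lease name comes from the config above):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19373-1122/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// plndr-cp-lock is the vip_leasename from the manifest above.
	lease, err := cs.CoordinationV1().Leases("kube-system").Get(context.Background(), "plndr-cp-lock", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if lease.Spec.HolderIdentity != nil {
		fmt.Println("current VIP holder:", *lease.Spec.HolderIdentity)
	}
}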
	I0805 16:10:09.586454    4013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 16:10:09.595015    4013 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:10:09.595072    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0805 16:10:09.604755    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0805 16:10:09.618293    4013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:10:09.632089    4013 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0805 16:10:09.645814    4013 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0805 16:10:09.648794    4013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:10:09.658221    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:10:09.755214    4013 ssh_runner.go:195] Run: sudo systemctl start kubelet
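Two details worth noting in the unit written above: the empty ExecStart= line in the drop-in clears the base unit's ExecStart before the drop-in redefines it (systemd's standard override pattern), and systemctl daemon-reload must run before start so systemd picks up the freshly scp'd unit files. A minimal local sketch of that reload-then-start sequence (the runner performs the same two commands over SSH):

package main

import (
	"log"
	"os/exec"
)

func main() {
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"}, // re-read unit files written moments ago
		{"systemctl", "start", "kubelet"},
	} {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			log.Fatalf("%v failed: %v\n%s", args, err, out)
		}
	}
}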
	I0805 16:10:09.770035    4013 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:10:09.770231    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:10:09.791589    4013 out.go:177] * Verifying Kubernetes components...
	I0805 16:10:09.812147    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:10:09.922409    4013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:10:09.937680    4013 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:10:09.937905    4013 kapi.go:59] client config for ha-968000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x85c5060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0805 16:10:09.937943    4013 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0805 16:10:09.938123    4013 node_ready.go:35] waiting up to 6m0s for node "ha-968000-m03" to be "Ready" ...
	I0805 16:10:09.938166    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:09.938171    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:09.938177    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:09.938184    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:09.940537    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:09.940846    4013 node_ready.go:49] node "ha-968000-m03" has status "Ready":"True"
	I0805 16:10:09.940856    4013 node_ready.go:38] duration metric: took 2.724361ms for node "ha-968000-m03" to be "Ready" ...
	I0805 16:10:09.940863    4013 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:10:09.940900    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:10:09.940905    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:09.940911    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:09.940915    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:09.945944    4013 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 16:10:09.953862    4013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hjp5z" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:09.953919    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hjp5z
	I0805 16:10:09.953924    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:09.953930    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:09.953934    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:09.956348    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:09.956979    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:09.956988    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:09.956994    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:09.956998    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:09.959221    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:09.959622    4013 pod_ready.go:92] pod "coredns-7db6d8ff4d-hjp5z" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:09.959632    4013 pod_ready.go:81] duration metric: took 5.75325ms for pod "coredns-7db6d8ff4d-hjp5z" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:09.959646    4013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:09.959683    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:09.959688    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:09.959693    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:09.959697    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:09.961820    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:09.962245    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:09.962252    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:09.962258    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:09.962262    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:09.964245    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:10.460326    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:10.460341    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:10.460347    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:10.460351    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:10.462931    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:10.463525    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:10.463534    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:10.463540    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:10.463545    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:10.465741    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:10.960459    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:10.960479    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:10.960487    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:10.960490    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:10.964999    4013 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 16:10:10.965521    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:10.965531    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:10.965538    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:10.965541    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:10.968401    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:11.459862    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:11.459879    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:11.459888    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:11.459896    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:11.462705    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:11.463338    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:11.463348    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:11.463355    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:11.463359    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:11.465847    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:11.960724    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:11.960741    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:11.960748    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:11.960751    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:11.963442    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:11.963893    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:11.963902    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:11.963909    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:11.963915    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:11.966015    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:11.966351    4013 pod_ready.go:102] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"False"
	I0805 16:10:12.460750    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:12.460767    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:12.460775    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:12.460780    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:12.463726    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:12.464380    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:12.464390    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:12.464397    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:12.464403    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:12.466771    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:12.959777    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:12.959794    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:12.959800    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:12.959803    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:12.963016    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:12.963521    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:12.963530    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:12.963537    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:12.963541    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:12.965964    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:13.461027    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:13.461044    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:13.461052    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:13.461056    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:13.463804    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:13.464772    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:13.464781    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:13.464789    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:13.464792    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:13.467029    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:13.961022    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:13.961082    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:13.961090    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:13.961093    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:13.963530    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:13.964018    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:13.964026    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:13.964037    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:13.964040    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:13.966396    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:13.966704    4013 pod_ready.go:102] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"False"
	I0805 16:10:14.460972    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:14.461029    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:14.461037    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:14.461040    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:14.463269    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:14.463827    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:14.463834    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:14.463840    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:14.463844    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:14.465651    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:14.960796    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:14.960810    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:14.960817    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:14.960821    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:14.963503    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:14.964069    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:14.964076    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:14.964082    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:14.964085    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:14.965973    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:15.460976    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:15.461042    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:15.461054    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:15.461062    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:15.464639    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:15.465242    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:15.465250    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:15.465255    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:15.465259    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:15.467095    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:15.960558    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:15.960569    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:15.960575    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:15.960579    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:15.962733    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:15.963261    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:15.963268    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:15.963274    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:15.963278    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:15.964836    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:16.460120    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:16.460142    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:16.460150    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:16.460154    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:16.462634    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:16.463246    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:16.463254    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:16.463260    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:16.463264    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:16.464841    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:16.465283    4013 pod_ready.go:102] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"False"
	I0805 16:10:16.959766    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:16.959781    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:16.959789    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:16.959792    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:16.962161    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:16.962538    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:16.962546    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:16.962551    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:16.962554    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:16.964199    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:17.459940    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:17.460028    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:17.460043    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:17.460058    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:17.463177    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:17.463929    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:17.463939    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:17.463947    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:17.463954    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:17.465814    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:17.960492    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:17.960517    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:17.960529    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:17.960535    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:17.963854    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:17.964340    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:17.964348    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:17.964354    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:17.964359    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:17.965846    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:18.459859    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:18.459922    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:18.459934    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:18.459943    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:18.463097    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:18.463745    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:18.463756    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:18.463764    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:18.463769    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:18.466108    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:18.466647    4013 pod_ready.go:102] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"False"
	I0805 16:10:18.961260    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:18.961336    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:18.961346    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:18.961351    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:18.964473    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:18.964862    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:18.964870    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:18.964876    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:18.964879    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:18.966810    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:19.461327    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:19.461342    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:19.461349    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:19.461352    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:19.463586    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:19.464052    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:19.464061    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:19.464067    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:19.464071    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:19.465827    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:19.959893    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:19.959916    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:19.959928    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:19.959936    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:19.963708    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:19.964323    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:19.964330    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:19.964337    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:19.964341    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:19.966276    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:20.460973    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:20.460999    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:20.461012    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:20.461019    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:20.464211    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:20.464772    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:20.464780    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:20.464786    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:20.464790    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:20.466297    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:20.466755    4013 pod_ready.go:102] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"False"
	I0805 16:10:20.960914    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:20.960928    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:20.960937    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:20.960940    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:20.963464    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:20.963838    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:20.963846    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:20.963851    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:20.963855    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:20.965570    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:21.461564    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:21.461601    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:21.461612    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:21.461617    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:21.464031    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:21.464425    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:21.464433    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:21.464439    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:21.464442    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:21.466022    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:21.960219    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:21.960247    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:21.960261    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:21.960271    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:21.963797    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:21.964415    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:21.964422    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:21.964428    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:21.964431    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:21.966018    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.460781    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:22.460829    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.460837    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.460841    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.463024    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:22.463683    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:22.463691    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.463697    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.463701    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.465467    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.960911    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:22.960935    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.960982    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.960999    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.964197    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:22.964786    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:22.964793    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.964799    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.964802    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.966466    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.966844    4013 pod_ready.go:92] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:22.966853    4013 pod_ready.go:81] duration metric: took 13.007198003s for pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace to be "Ready" ...
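The 13 seconds of paired GETs above are pod_ready's poll loop: fetch the pod, fetch its node, re-check the Ready condition roughly every 500ms until it flips to True. A condensed sketch of the same loop with client-go, with the kubeconfig path and pod name taken from this run:

package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19373-1122/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7db6d8ff4d-mfzln", metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					log.Println("pod is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the ~500ms cadence visible above
	}
	log.Fatal("timed out waiting for pod to become Ready")
}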
	I0805 16:10:22.966869    4013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:22.966901    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000
	I0805 16:10:22.966906    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.966912    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.966916    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.968437    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.968826    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:22.968833    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.968839    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.968842    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.970427    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.970912    4013 pod_ready.go:92] pod "etcd-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:22.970922    4013 pod_ready.go:81] duration metric: took 4.046965ms for pod "etcd-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:22.970928    4013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:22.970963    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m02
	I0805 16:10:22.970968    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.970973    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.970978    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.972820    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.973377    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:22.973385    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.973391    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.973395    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.975041    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.975357    4013 pod_ready.go:92] pod "etcd-ha-968000-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:22.975366    4013 pod_ready.go:81] duration metric: took 4.433286ms for pod "etcd-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:22.975373    4013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:22.975410    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m03
	I0805 16:10:22.975415    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.975421    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.975428    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.977033    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.977409    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:22.977416    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.977422    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.977425    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.978990    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:23.477076    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m03
	I0805 16:10:23.477102    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:23.477114    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:23.477120    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:23.480444    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:23.480920    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:23.480927    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:23.480934    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:23.480937    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:23.482684    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:23.976407    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m03
	I0805 16:10:23.976432    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:23.976443    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:23.976450    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:23.979450    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:23.979998    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:23.980005    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:23.980011    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:23.980015    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:23.981679    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:24.476784    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m03
	I0805 16:10:24.476798    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.476805    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.476814    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.479014    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:24.479514    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:24.479522    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.479528    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.479531    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.481269    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:24.481711    4013 pod_ready.go:92] pod "etcd-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:24.481720    4013 pod_ready.go:81] duration metric: took 1.506341693s for pod "etcd-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
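The paired GETs above (the pod, then its hosting node) repeat on a roughly 500ms cadence until the pod reports Ready, capped by the 6m0s budget. A minimal Go sketch of that fixed-interval wait pattern, with a hypothetical check callback standing in for minikube's actual pod_ready helpers:

    package readiness

    import (
        "context"
        "errors"
        "time"
    )

    // waitPodReady re-runs check every interval until it returns true or the
    // timeout elapses, mirroring the ~500ms GET cadence visible in the log.
    func waitPodReady(ctx context.Context, interval, timeout time.Duration,
        check func(context.Context) (bool, error)) error {
        ctx, cancel := context.WithTimeout(ctx, timeout)
        defer cancel()
        tick := time.NewTicker(interval)
        defer tick.Stop()
        for {
            ready, err := check(ctx)
            if err != nil {
                return err
            }
            if ready {
                return nil
            }
            select {
            case <-ctx.Done():
                return errors.New("timed out waiting for pod to be Ready")
            case <-tick.C:
            }
        }
    }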
	I0805 16:10:24.481735    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:24.481776    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000
	I0805 16:10:24.481781    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.481787    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.481791    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.483526    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:24.483895    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:24.483903    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.483909    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.483913    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.485324    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:24.485707    4013 pod_ready.go:92] pod "kube-apiserver-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:24.485716    4013 pod_ready.go:81] duration metric: took 3.976033ms for pod "kube-apiserver-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:24.485725    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:24.485755    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m02
	I0805 16:10:24.485761    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.485766    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.485771    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.487225    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:24.561028    4013 request.go:629] Waited for 73.447214ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:24.561115    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:24.561127    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.561139    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.561146    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.564386    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:24.564772    4013 pod_ready.go:92] pod "kube-apiserver-ha-968000-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:24.564785    4013 pod_ready.go:81] duration metric: took 79.054588ms for pod "kube-apiserver-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
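The "Waited for ... due to client-side throttling, not priority and fairness" lines are emitted by client-go's local token-bucket rate limiter, which sleeps before sending once the client outruns its configured QPS; they do not indicate server-side API Priority and Fairness. A hedged sketch of where that limit lives (QPS and Burst are real rest.Config fields; the values here are illustrative only):

    package throttling

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // newClient builds a clientset with an explicit client-side rate limit.
    // When requests exceed QPS, client-go delays them and logs the
    // "Waited for ... due to client-side throttling" message seen above.
    func newClient(cfg *rest.Config) (*kubernetes.Clientset, error) {
        cfg.QPS = 5    // steady-state requests per second (illustrative)
        cfg.Burst = 10 // short burst allowance (illustrative)
        return kubernetes.NewForConfig(cfg)
    }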
	I0805 16:10:24.564795    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:24.761641    4013 request.go:629] Waited for 196.793833ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m03
	I0805 16:10:24.761722    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m03
	I0805 16:10:24.761728    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.761734    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.761738    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.763753    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:24.961783    4013 request.go:629] Waited for 197.554669ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:24.961853    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:24.961860    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.961868    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.961872    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.964254    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:24.964712    4013 pod_ready.go:92] pod "kube-apiserver-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:24.964722    4013 pod_ready.go:81] duration metric: took 399.920246ms for pod "kube-apiserver-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:24.964728    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:25.161961    4013 request.go:629] Waited for 197.196834ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000
	I0805 16:10:25.162018    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000
	I0805 16:10:25.162024    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:25.162028    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:25.162032    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:25.164098    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:25.362062    4013 request.go:629] Waited for 197.590252ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:25.362143    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:25.362150    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:25.362158    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:25.362164    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:25.364469    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:25.364982    4013 pod_ready.go:92] pod "kube-controller-manager-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:25.364995    4013 pod_ready.go:81] duration metric: took 400.260627ms for pod "kube-controller-manager-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:25.365004    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:25.561095    4013 request.go:629] Waited for 196.05214ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m02
	I0805 16:10:25.561139    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m02
	I0805 16:10:25.561147    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:25.561173    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:25.561180    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:25.563313    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:25.761969    4013 request.go:629] Waited for 198.293569ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:25.762009    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:25.762016    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:25.762027    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:25.762062    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:25.764659    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:25.765098    4013 pod_ready.go:92] pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:25.765107    4013 pod_ready.go:81] duration metric: took 400.096353ms for pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:25.765120    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:25.961382    4013 request.go:629] Waited for 196.226504ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:10:25.961416    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:10:25.961422    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:25.961434    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:25.961446    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:25.963534    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:26.162364    4013 request.go:629] Waited for 198.280605ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:26.162397    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:26.162402    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:26.162408    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:26.162412    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:26.164357    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:26.362197    4013 request.go:629] Waited for 94.915828ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:10:26.362260    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:10:26.362266    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:26.362273    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:26.362276    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:26.364350    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:26.562545    4013 request.go:629] Waited for 197.745091ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:26.562624    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:26.562630    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:26.562637    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:26.562640    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:26.565319    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:26.767236    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:10:26.767251    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:26.767257    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:26.767262    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:26.769341    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:26.962089    4013 request.go:629] Waited for 192.24367ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:26.962162    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:26.962168    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:26.962175    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:26.962178    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:26.964212    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:27.267240    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:10:27.267258    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:27.267266    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:27.267270    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:27.269879    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:27.362824    4013 request.go:629] Waited for 92.466824ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:27.362855    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:27.362861    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:27.362867    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:27.362873    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:27.364886    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:27.365316    4013 pod_ready.go:92] pod "kube-controller-manager-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:27.365326    4013 pod_ready.go:81] duration metric: took 1.600199608s for pod "kube-controller-manager-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:27.365333    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fvd5q" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:27.562545    4013 request.go:629] Waited for 197.173723ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvd5q
	I0805 16:10:27.562641    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvd5q
	I0805 16:10:27.562650    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:27.562667    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:27.562672    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:27.564919    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:27.762505    4013 request.go:629] Waited for 197.212423ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:27.762538    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:27.762543    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:27.762549    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:27.762554    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:27.764932    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:27.765395    4013 pod_ready.go:92] pod "kube-proxy-fvd5q" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:27.765405    4013 pod_ready.go:81] duration metric: took 400.066585ms for pod "kube-proxy-fvd5q" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:27.765413    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p4xgk" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:27.962081    4013 request.go:629] Waited for 196.624809ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p4xgk
	I0805 16:10:27.962208    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p4xgk
	I0805 16:10:27.962219    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:27.962231    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:27.962265    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:27.965643    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:28.161558    4013 request.go:629] Waited for 195.152397ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:28.161641    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:28.161650    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:28.161658    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:28.161662    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:28.164062    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:28.164477    4013 pod_ready.go:92] pod "kube-proxy-p4xgk" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:28.164486    4013 pod_ready.go:81] duration metric: took 399.068204ms for pod "kube-proxy-p4xgk" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:28.164494    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qptt6" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:28.362129    4013 request.go:629] Waited for 197.598336ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qptt6
	I0805 16:10:28.362162    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qptt6
	I0805 16:10:28.362167    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:28.362173    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:28.362177    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:28.364194    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:28.561667    4013 request.go:629] Waited for 196.999586ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m04
	I0805 16:10:28.561700    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m04
	I0805 16:10:28.561748    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:28.561756    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:28.561759    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:28.564274    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:28.564561    4013 pod_ready.go:97] node "ha-968000-m04" hosting pod "kube-proxy-qptt6" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m04" has status "Ready":"Unknown"
	I0805 16:10:28.564573    4013 pod_ready.go:81] duration metric: took 400.073458ms for pod "kube-proxy-qptt6" in "kube-system" namespace to be "Ready" ...
	E0805 16:10:28.564580    4013 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-968000-m04" hosting pod "kube-proxy-qptt6" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m04" has status "Ready":"Unknown"
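Note that the readiness wait inspects the hosting node as well as the pod: because ha-968000-m04 reports Ready "Unknown", kube-proxy-qptt6 is skipped rather than waited on. A small sketch of that node-condition gate using the standard corev1 types (a hypothetical helper, not minikube's own):

    package readiness

    import corev1 "k8s.io/api/core/v1"

    // nodeIsReady reports whether the node's Ready condition is True.
    // A status of "Unknown" (as for ha-968000-m04 above) counts as not ready.
    func nodeIsReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }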
	I0805 16:10:28.564585    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v87jb" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:28.761155    4013 request.go:629] Waited for 196.536425ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v87jb
	I0805 16:10:28.761194    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v87jb
	I0805 16:10:28.761220    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:28.761235    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:28.761241    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:28.763501    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:28.962341    4013 request.go:629] Waited for 198.29849ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:28.962395    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:28.962429    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:28.962455    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:28.962470    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:28.965239    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:28.965595    4013 pod_ready.go:92] pod "kube-proxy-v87jb" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:28.965603    4013 pod_ready.go:81] duration metric: took 401.013479ms for pod "kube-proxy-v87jb" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:28.965611    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:29.161737    4013 request.go:629] Waited for 196.060247ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000
	I0805 16:10:29.161876    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000
	I0805 16:10:29.161889    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:29.161901    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:29.161907    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:29.165617    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:29.361022    4013 request.go:629] Waited for 194.748045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:29.361106    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:29.361115    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:29.361123    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:29.361133    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:29.363092    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:29.363445    4013 pod_ready.go:92] pod "kube-scheduler-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:29.363455    4013 pod_ready.go:81] duration metric: took 397.839229ms for pod "kube-scheduler-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:29.363462    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:29.562518    4013 request.go:629] Waited for 199.009741ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m02
	I0805 16:10:29.562602    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m02
	I0805 16:10:29.562608    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:29.562616    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:29.562621    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:29.565612    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:29.761127    4013 request.go:629] Waited for 195.236074ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:29.761159    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:29.761163    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:29.761169    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:29.761174    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:29.763545    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:29.764045    4013 pod_ready.go:92] pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:29.764056    4013 pod_ready.go:81] duration metric: took 400.588926ms for pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:29.764063    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:29.961261    4013 request.go:629] Waited for 197.156425ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m03
	I0805 16:10:29.961356    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m03
	I0805 16:10:29.961365    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:29.961373    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:29.961379    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:29.963937    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:30.162354    4013 request.go:629] Waited for 197.925421ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:30.162411    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:30.162422    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:30.162485    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:30.162494    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:30.165503    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:30.166291    4013 pod_ready.go:92] pod "kube-scheduler-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:30.166300    4013 pod_ready.go:81] duration metric: took 402.232052ms for pod "kube-scheduler-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:30.166308    4013 pod_ready.go:38] duration metric: took 20.225431391s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:10:30.166322    4013 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:10:30.166373    4013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:10:30.178781    4013 api_server.go:72] duration metric: took 20.408716061s to wait for apiserver process to appear ...
	I0805 16:10:30.178794    4013 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:10:30.178806    4013 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0805 16:10:30.181777    4013 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0805 16:10:30.181817    4013 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0805 16:10:30.181822    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:30.181828    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:30.181832    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:30.182461    4013 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:10:30.182514    4013 api_server.go:141] control plane version: v1.30.3
	I0805 16:10:30.182522    4013 api_server.go:131] duration metric: took 3.723541ms to wait for apiserver health ...
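The health gate is a plain HTTPS GET: /healthz must return 200 with body "ok", and /version is then read to record the control-plane version (v1.30.3 here). A minimal sketch, assuming certificate verification is skipped purely for brevity; a real client would present the cluster CA and client certificates:

    package healthz

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz GETs <base>/healthz and succeeds on a 200 "ok" body.
    // InsecureSkipVerify is for sketch brevity only; use the cluster CA in practice.
    func checkHealthz(base string) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }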
	I0805 16:10:30.182527    4013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 16:10:30.361346    4013 request.go:629] Waited for 178.775767ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:10:30.361395    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:10:30.361407    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:30.361483    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:30.361495    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:30.367528    4013 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 16:10:30.373218    4013 system_pods.go:59] 26 kube-system pods found
	I0805 16:10:30.373231    4013 system_pods.go:61] "coredns-7db6d8ff4d-hjp5z" [e31fd97b-2727-4db3-a17c-3302c320832b] Running
	I0805 16:10:30.373242    4013 system_pods.go:61] "coredns-7db6d8ff4d-mfzln" [ea5c136e-84a6-4253-8f61-85c427b83840] Running
	I0805 16:10:30.373246    4013 system_pods.go:61] "etcd-ha-968000" [24590478-199e-4d78-8312-3d5924d6e915] Running
	I0805 16:10:30.373249    4013 system_pods.go:61] "etcd-ha-968000-m02" [cefe6f5a-3a87-4ccf-9419-0b864275c9c9] Running
	I0805 16:10:30.373253    4013 system_pods.go:61] "etcd-ha-968000-m03" [ec752887-5a12-4888-ba88-3fb5d54c6ce7] Running
	I0805 16:10:30.373255    4013 system_pods.go:61] "kindnet-5dshm" [2641d2a9-a26a-4cbe-b8ea-99ed7c7af43c] Running
	I0805 16:10:30.373258    4013 system_pods.go:61] "kindnet-cglm9" [80a5d2ca-3d9f-4347-bb68-cd6eac4e4aa8] Running
	I0805 16:10:30.373261    4013 system_pods.go:61] "kindnet-fp5ns" [bf9c4454-9491-4a21-8f0a-6c6f21919551] Running
	I0805 16:10:30.373267    4013 system_pods.go:61] "kindnet-qh6l6" [382ac149-5a4e-4fe4-aaaa-9c929c93b101] Running
	I0805 16:10:30.373270    4013 system_pods.go:61] "kube-apiserver-ha-968000" [04e9a721-eb6e-47b4-a7f0-2cad1ee201f7] Running
	I0805 16:10:30.373272    4013 system_pods.go:61] "kube-apiserver-ha-968000-m02" [0465a825-6697-4a98-bb88-18df7929a5dd] Running
	I0805 16:10:30.373275    4013 system_pods.go:61] "kube-apiserver-ha-968000-m03" [a0d3fc83-9820-463e-81bb-2abcb1b4c868] Running
	I0805 16:10:30.373278    4013 system_pods.go:61] "kube-controller-manager-ha-968000" [2078d070-21b4-4d47-a4d3-b130fa8b3aaf] Running
	I0805 16:10:30.373280    4013 system_pods.go:61] "kube-controller-manager-ha-968000-m02" [f0a1cc06-05bb-4efa-9a53-ebccba2b5f9e] Running
	I0805 16:10:30.373283    4013 system_pods.go:61] "kube-controller-manager-ha-968000-m03" [d140abba-93f2-4062-8ee8-3918ff5ae882] Running
	I0805 16:10:30.373286    4013 system_pods.go:61] "kube-proxy-fvd5q" [f2f13535-5802-4a1c-8243-48de42b79e74] Running
	I0805 16:10:30.373290    4013 system_pods.go:61] "kube-proxy-p4xgk" [aaca6036-f95c-44fb-a358-5ac881148fa4] Running
	I0805 16:10:30.373293    4013 system_pods.go:61] "kube-proxy-qptt6" [a826a636-1d05-4cca-a56d-d25a9cf41506] Running
	I0805 16:10:30.373296    4013 system_pods.go:61] "kube-proxy-v87jb" [d98f61ac-3a61-452c-8507-7258a9703c15] Running
	I0805 16:10:30.373298    4013 system_pods.go:61] "kube-scheduler-ha-968000" [20bf4b5e-71a1-4708-bb6a-34b0e44f196d] Running
	I0805 16:10:30.373301    4013 system_pods.go:61] "kube-scheduler-ha-968000-m02" [e590d5bf-9517-433b-9759-5b0f16cfe9a9] Running
	I0805 16:10:30.373303    4013 system_pods.go:61] "kube-scheduler-ha-968000-m03" [91120005-f0b0-47d5-a91c-c06b12e6da3e] Running
	I0805 16:10:30.373306    4013 system_pods.go:61] "kube-vip-ha-968000" [ac1aab33-b1d7-4b08-bde4-1bbd87c671f6] Running
	I0805 16:10:30.373308    4013 system_pods.go:61] "kube-vip-ha-968000-m02" [713fc36a-5582-464c-82d3-02905c81b753] Running
	I0805 16:10:30.373311    4013 system_pods.go:61] "kube-vip-ha-968000-m03" [d94a7e1c-9ddd-4229-b4cd-ac05384dd20a] Running
	I0805 16:10:30.373315    4013 system_pods.go:61] "storage-provisioner" [52e2952a-756d-4f65-84f5-588cb6563297] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 16:10:30.373320    4013 system_pods.go:74] duration metric: took 190.788685ms to wait for pod list to return data ...
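The storage-provisioner entry shows why the list prints both signals: a pod can be phase Running while its Ready/ContainersReady conditions are false. A sketch reading the two independently (standard corev1 fields):

    package readiness

    import corev1 "k8s.io/api/core/v1"

    // podRunningAndReady distinguishes the pod phase from the Ready condition,
    // matching log entries like "Running / Ready:ContainersNotReady".
    func podRunningAndReady(pod *corev1.Pod) (running, ready bool) {
        running = pod.Status.Phase == corev1.PodRunning
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                ready = true
            }
        }
        return running, ready
    }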
	I0805 16:10:30.373327    4013 default_sa.go:34] waiting for default service account to be created ...
	I0805 16:10:30.561033    4013 request.go:629] Waited for 187.657545ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:10:30.561084    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:10:30.561123    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:30.561138    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:30.561146    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:30.564680    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:30.564786    4013 default_sa.go:45] found service account: "default"
	I0805 16:10:30.564796    4013 default_sa.go:55] duration metric: took 191.464074ms for default service account to be created ...
	I0805 16:10:30.564801    4013 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 16:10:30.761949    4013 request.go:629] Waited for 197.098715ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:10:30.762013    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:10:30.762021    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:30.762029    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:30.762035    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:30.768776    4013 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 16:10:30.774173    4013 system_pods.go:86] 26 kube-system pods found
	I0805 16:10:30.774191    4013 system_pods.go:89] "coredns-7db6d8ff4d-hjp5z" [e31fd97b-2727-4db3-a17c-3302c320832b] Running
	I0805 16:10:30.774196    4013 system_pods.go:89] "coredns-7db6d8ff4d-mfzln" [ea5c136e-84a6-4253-8f61-85c427b83840] Running
	I0805 16:10:30.774200    4013 system_pods.go:89] "etcd-ha-968000" [24590478-199e-4d78-8312-3d5924d6e915] Running
	I0805 16:10:30.774203    4013 system_pods.go:89] "etcd-ha-968000-m02" [cefe6f5a-3a87-4ccf-9419-0b864275c9c9] Running
	I0805 16:10:30.774207    4013 system_pods.go:89] "etcd-ha-968000-m03" [ec752887-5a12-4888-ba88-3fb5d54c6ce7] Running
	I0805 16:10:30.774211    4013 system_pods.go:89] "kindnet-5dshm" [2641d2a9-a26a-4cbe-b8ea-99ed7c7af43c] Running
	I0805 16:10:30.774214    4013 system_pods.go:89] "kindnet-cglm9" [80a5d2ca-3d9f-4347-bb68-cd6eac4e4aa8] Running
	I0805 16:10:30.774219    4013 system_pods.go:89] "kindnet-fp5ns" [bf9c4454-9491-4a21-8f0a-6c6f21919551] Running
	I0805 16:10:30.774222    4013 system_pods.go:89] "kindnet-qh6l6" [382ac149-5a4e-4fe4-aaaa-9c929c93b101] Running
	I0805 16:10:30.774225    4013 system_pods.go:89] "kube-apiserver-ha-968000" [04e9a721-eb6e-47b4-a7f0-2cad1ee201f7] Running
	I0805 16:10:30.774229    4013 system_pods.go:89] "kube-apiserver-ha-968000-m02" [0465a825-6697-4a98-bb88-18df7929a5dd] Running
	I0805 16:10:30.774232    4013 system_pods.go:89] "kube-apiserver-ha-968000-m03" [a0d3fc83-9820-463e-81bb-2abcb1b4c868] Running
	I0805 16:10:30.774236    4013 system_pods.go:89] "kube-controller-manager-ha-968000" [2078d070-21b4-4d47-a4d3-b130fa8b3aaf] Running
	I0805 16:10:30.774240    4013 system_pods.go:89] "kube-controller-manager-ha-968000-m02" [f0a1cc06-05bb-4efa-9a53-ebccba2b5f9e] Running
	I0805 16:10:30.774243    4013 system_pods.go:89] "kube-controller-manager-ha-968000-m03" [d140abba-93f2-4062-8ee8-3918ff5ae882] Running
	I0805 16:10:30.774246    4013 system_pods.go:89] "kube-proxy-fvd5q" [f2f13535-5802-4a1c-8243-48de42b79e74] Running
	I0805 16:10:30.774250    4013 system_pods.go:89] "kube-proxy-p4xgk" [aaca6036-f95c-44fb-a358-5ac881148fa4] Running
	I0805 16:10:30.774253    4013 system_pods.go:89] "kube-proxy-qptt6" [a826a636-1d05-4cca-a56d-d25a9cf41506] Running
	I0805 16:10:30.774257    4013 system_pods.go:89] "kube-proxy-v87jb" [d98f61ac-3a61-452c-8507-7258a9703c15] Running
	I0805 16:10:30.774261    4013 system_pods.go:89] "kube-scheduler-ha-968000" [20bf4b5e-71a1-4708-bb6a-34b0e44f196d] Running
	I0805 16:10:30.774265    4013 system_pods.go:89] "kube-scheduler-ha-968000-m02" [e590d5bf-9517-433b-9759-5b0f16cfe9a9] Running
	I0805 16:10:30.774268    4013 system_pods.go:89] "kube-scheduler-ha-968000-m03" [91120005-f0b0-47d5-a91c-c06b12e6da3e] Running
	I0805 16:10:30.774271    4013 system_pods.go:89] "kube-vip-ha-968000" [ac1aab33-b1d7-4b08-bde4-1bbd87c671f6] Running
	I0805 16:10:30.774275    4013 system_pods.go:89] "kube-vip-ha-968000-m02" [713fc36a-5582-464c-82d3-02905c81b753] Running
	I0805 16:10:30.774281    4013 system_pods.go:89] "kube-vip-ha-968000-m03" [d94a7e1c-9ddd-4229-b4cd-ac05384dd20a] Running
	I0805 16:10:30.774287    4013 system_pods.go:89] "storage-provisioner" [52e2952a-756d-4f65-84f5-588cb6563297] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 16:10:30.774292    4013 system_pods.go:126] duration metric: took 209.48655ms to wait for k8s-apps to be running ...
	I0805 16:10:30.774299    4013 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 16:10:30.774355    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:10:30.784922    4013 system_svc.go:56] duration metric: took 10.617828ms WaitForService to wait for kubelet
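The kubelet probe relies entirely on systemctl's exit status: `is-active --quiet` prints nothing and exits 0 only when the unit is active, so no output parsing is needed. A local sketch of the same probe the ssh_runner executes above:

    package svc

    import "os/exec"

    // kubeletActive returns true when systemd reports the kubelet unit active.
    // `systemctl is-active --quiet` stays silent and signals via exit code.
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }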
	I0805 16:10:30.784940    4013 kubeadm.go:582] duration metric: took 21.014875463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:10:30.784959    4013 node_conditions.go:102] verifying NodePressure condition ...
	I0805 16:10:30.960928    4013 request.go:629] Waited for 175.930639ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0805 16:10:30.960954    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0805 16:10:30.960958    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:30.960965    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:30.960969    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:30.963520    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:30.964254    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:10:30.964263    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:10:30.964270    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:10:30.964274    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:10:30.964278    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:10:30.964281    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:10:30.964284    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:10:30.964287    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:10:30.964290    4013 node_conditions.go:105] duration metric: took 179.327419ms to run NodePressure ...
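The NodePressure pass reads each of the four nodes' reported capacity (2 CPUs and 17734596Ki of ephemeral storage apiece) straight from Node.Status.Capacity. A sketch that prints the same two figures per node:

    package nodes

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // printCapacity echoes the two capacity figures the log reports per node.
    func printCapacity(node *corev1.Node) {
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("node cpu capacity is %d\n", cpu.Value())
        fmt.Printf("node storage ephemeral capacity is %s\n", eph.String())
    }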
	I0805 16:10:30.964299    4013 start.go:241] waiting for startup goroutines ...
	I0805 16:10:30.964314    4013 start.go:255] writing updated cluster config ...
	I0805 16:10:30.985934    4013 out.go:177] 
	I0805 16:10:31.006970    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:10:31.007089    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:10:31.028647    4013 out.go:177] * Starting "ha-968000-m04" worker node in "ha-968000" cluster
	I0805 16:10:31.070449    4013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:10:31.070470    4013 cache.go:56] Caching tarball of preloaded images
	I0805 16:10:31.070587    4013 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:10:31.070597    4013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:10:31.070661    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:10:31.071212    4013 start.go:360] acquireMachinesLock for ha-968000-m04: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:10:31.071274    4013 start.go:364] duration metric: took 48.958µs to acquireMachinesLock for "ha-968000-m04"
	I0805 16:10:31.071288    4013 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:10:31.071292    4013 fix.go:54] fixHost starting: m04
	I0805 16:10:31.071532    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:10:31.071551    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:10:31.080682    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51965
	I0805 16:10:31.081033    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:10:31.081390    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:10:31.081404    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:10:31.081602    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:10:31.081699    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:10:31.081797    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetState
	I0805 16:10:31.081874    4013 main.go:141] libmachine: (ha-968000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:10:31.081960    4013 main.go:141] libmachine: (ha-968000-m04) DBG | hyperkit pid from json: 3587
	I0805 16:10:31.082940    4013 main.go:141] libmachine: (ha-968000-m04) DBG | hyperkit pid 3587 missing from process table
	I0805 16:10:31.082969    4013 fix.go:112] recreateIfNeeded on ha-968000-m04: state=Stopped err=<nil>
	I0805 16:10:31.082980    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	W0805 16:10:31.083071    4013 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:10:31.103629    4013 out.go:177] * Restarting existing hyperkit VM for "ha-968000-m04" ...
	I0805 16:10:31.144437    4013 main.go:141] libmachine: (ha-968000-m04) Calling .Start
	I0805 16:10:31.144560    4013 main.go:141] libmachine: (ha-968000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:10:31.144576    4013 main.go:141] libmachine: (ha-968000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/hyperkit.pid
	I0805 16:10:31.144624    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Using UUID a18c3228-c5cd-4311-88be-5c31f452a5bc
	I0805 16:10:31.170211    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Generated MAC 2e:80:64:4a:6a:1a
	I0805 16:10:31.170234    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000
	I0805 16:10:31.170385    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a18c3228-c5cd-4311-88be-5c31f452a5bc", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ad770)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:10:31.170420    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a18c3228-c5cd-4311-88be-5c31f452a5bc", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ad770)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:10:31.170473    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a18c3228-c5cd-4311-88be-5c31f452a5bc", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/ha-968000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"}
	I0805 16:10:31.170506    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a18c3228-c5cd-4311-88be-5c31f452a5bc -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/ha-968000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"
	I0805 16:10:31.170534    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:10:31.171899    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 DEBUG: hyperkit: Pid is 4076
	I0805 16:10:31.172381    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Attempt 0
	I0805 16:10:31.172398    4013 main.go:141] libmachine: (ha-968000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:10:31.172450    4013 main.go:141] libmachine: (ha-968000-m04) DBG | hyperkit pid from json: 4076
	I0805 16:10:31.173609    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Searching for 2e:80:64:4a:6a:1a in /var/db/dhcpd_leases ...
	I0805 16:10:31.173677    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0805 16:10:31.173696    4013 main.go:141] libmachine: (ha-968000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b2ad30}
	I0805 16:10:31.173728    4013 main.go:141] libmachine: (ha-968000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:10:31.173759    4013 main.go:141] libmachine: (ha-968000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2acfd}
	I0805 16:10:31.173793    4013 main.go:141] libmachine: (ha-968000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b15b5a}
	I0805 16:10:31.173811    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Found match: 2e:80:64:4a:6a:1a
	I0805 16:10:31.173825    4013 main.go:141] libmachine: (ha-968000-m04) DBG | IP: 192.169.0.8
	I0805 16:10:31.173829    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetConfigRaw
	I0805 16:10:31.174658    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetIP
	I0805 16:10:31.174867    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:10:31.175539    4013 machine.go:94] provisionDockerMachine start ...
	I0805 16:10:31.175554    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:10:31.175674    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:10:31.175766    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:10:31.175918    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:10:31.176065    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:10:31.176193    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:10:31.176341    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:31.176494    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:10:31.176502    4013 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:10:31.179979    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:10:31.189022    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:10:31.190141    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:10:31.190167    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:10:31.190183    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:10:31.190196    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:10:31.578293    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:10:31.578309    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:10:31.693368    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:10:31.693393    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:10:31.693424    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:10:31.693448    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:10:31.694196    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:10:31.694209    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:10:37.416235    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:10:37.416360    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:10:37.416373    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:10:37.440251    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:11:06.247173    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 16:11:06.247187    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetMachineName
	I0805 16:11:06.247309    4013 buildroot.go:166] provisioning hostname "ha-968000-m04"
	I0805 16:11:06.247318    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetMachineName
	I0805 16:11:06.247423    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.247508    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:06.247594    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.247671    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.247772    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:06.247899    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:11:06.248060    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:11:06.248068    4013 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-968000-m04 && echo "ha-968000-m04" | sudo tee /etc/hostname
	I0805 16:11:06.317371    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-968000-m04
	
	I0805 16:11:06.317388    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.317526    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:06.317622    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.317715    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.317808    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:06.317937    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:11:06.318101    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:11:06.318113    4013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-968000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-968000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-968000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:11:06.382855    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
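Note: the "Using SSH client type: native" exchanges above are libmachine running each provisioning command (hostname, then the /etc/hosts rewrite) over a Go-native SSH session to 192.169.0.8:22 as user "docker". A minimal sketch of that pattern with golang.org/x/crypto/ssh follows, assuming key-only auth to a throwaway test VM; this is illustrative, not minikube's actual runner, which also handles retries and agent auth.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials the node with key auth and runs one command, roughly
// what each "About to run SSH command:" entry in the log amounts to.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for test VMs only
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.169.0.8:22", "docker", "id_rsa", "hostname")
	fmt.Println(out, err)
}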
	I0805 16:11:06.382871    4013 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:11:06.382888    4013 buildroot.go:174] setting up certificates
	I0805 16:11:06.382895    4013 provision.go:84] configureAuth start
	I0805 16:11:06.382903    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetMachineName
	I0805 16:11:06.383053    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetIP
	I0805 16:11:06.383164    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.383233    4013 provision.go:143] copyHostCerts
	I0805 16:11:06.383260    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:11:06.383324    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:11:06.383330    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:11:06.383467    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:11:06.383688    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:11:06.383735    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:11:06.383741    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:11:06.383821    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:11:06.383965    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:11:06.384005    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:11:06.384009    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:11:06.384091    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:11:06.384243    4013 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.ha-968000-m04 san=[127.0.0.1 192.169.0.8 ha-968000-m04 localhost minikube]
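Note: provision.go signs a per-node server certificate against the cluster CA using the SAN list logged above (127.0.0.1, 192.169.0.8, ha-968000-m04, localhost, minikube). A rough sketch of the equivalent crypto/x509 flow, assuming an RSA, PKCS#1-encoded CA key as docker-machine produces; serial number and validity choices here are illustrative.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"errors"
	"math/big"
	"net"
	"os"
	"time"
)

// loadCA parses a PEM CA certificate and its (assumed PKCS#1 RSA) private key.
func loadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey, error) {
	certPEM, err := os.ReadFile(certPath)
	if err != nil {
		return nil, nil, err
	}
	keyPEM, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, nil, err
	}
	certBlock, _ := pem.Decode(certPEM)
	keyBlock, _ := pem.Decode(keyPEM)
	if certBlock == nil || keyBlock == nil {
		return nil, nil, errors.New("bad PEM input")
	}
	cert, err := x509.ParseCertificate(certBlock.Bytes)
	if err != nil {
		return nil, nil, err
	}
	key, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		return nil, nil, err
	}
	return cert, key, nil
}

func main() {
	caCert, caKey, err := loadCA("ca.pem", "ca-key.pem")
	if err != nil {
		panic(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs mirror the "san=[...]" list in the log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-968000-m04"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-968000-m04", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.8")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}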
	I0805 16:11:06.441247    4013 provision.go:177] copyRemoteCerts
	I0805 16:11:06.441333    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:11:06.441360    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.441582    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:06.441714    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.441797    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:06.441875    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/id_rsa Username:docker}
	I0805 16:11:06.478976    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:11:06.479045    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:11:06.498620    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:11:06.498698    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 16:11:06.519415    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:11:06.519486    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:11:06.539397    4013 provision.go:87] duration metric: took 156.493754ms to configureAuth
	I0805 16:11:06.539413    4013 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:11:06.539605    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:11:06.539618    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:06.539752    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.539832    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:06.539911    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.540002    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.540090    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:06.540207    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:11:06.540372    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:11:06.540380    4013 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:11:06.599043    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:11:06.599055    4013 buildroot.go:70] root file system type: tmpfs
	I0805 16:11:06.599124    4013 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:11:06.599137    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.599263    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:06.599347    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.599450    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.599542    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:06.599675    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:11:06.599808    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:11:06.599855    4013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:11:06.668751    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:11:06.668771    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.668901    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:06.669001    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.669105    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.669186    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:06.669346    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:11:06.669490    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:11:06.669502    4013 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:11:08.250301    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
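Note: the unit above is rendered on the host, piped through `printf %s | sudo tee` into docker.service.new, and swapped in only when diff reports a difference (or, as here, when the target does not exist yet), followed by daemon-reload/enable/restart. A toy text/template rendering of just the proxy and ExecStart portion, assuming a template-based generator along minikube's lines; the real template carries every directive shown in the unit.

package main

import (
	"os"
	"text/template"
)

// Hypothetical, heavily trimmed stand-in for the docker.service template;
// only meant to show how the cumulative NO_PROXY lines get emitted.
const unitTmpl = `[Service]
{{range .NoProxy}}Environment="NO_PROXY={{.}}"
{{end}}ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unitTmpl))
	data := struct{ NoProxy []string }{NoProxy: []string{
		"192.169.0.5",
		"192.169.0.5,192.169.0.6",
		"192.169.0.5,192.169.0.6,192.169.0.7",
	}}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}

Since systemd lets a later Environment= assignment of the same variable override an earlier one, only the last NO_PROXY line (the full three-address list) is effective; the empty ExecStart= guards against an inherited command, per the comment block inside the unit itself.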
	
	I0805 16:11:08.250316    4013 machine.go:97] duration metric: took 37.074755145s to provisionDockerMachine
	I0805 16:11:08.250324    4013 start.go:293] postStartSetup for "ha-968000-m04" (driver="hyperkit")
	I0805 16:11:08.250332    4013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:11:08.250344    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:08.250520    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:11:08.250533    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:08.250626    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:08.250720    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:08.250813    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:08.250900    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/id_rsa Username:docker}
	I0805 16:11:08.286575    4013 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:11:08.289665    4013 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:11:08.289683    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:11:08.289795    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:11:08.289976    4013 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:11:08.289983    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:11:08.290190    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:11:08.297566    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:11:08.317678    4013 start.go:296] duration metric: took 67.345639ms for postStartSetup
	I0805 16:11:08.317700    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:08.317862    4013 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0805 16:11:08.317884    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:08.317967    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:08.318053    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:08.318144    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:08.318232    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/id_rsa Username:docker}
	I0805 16:11:08.353636    4013 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0805 16:11:08.353694    4013 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0805 16:11:08.385358    4013 fix.go:56] duration metric: took 37.314050272s for fixHost
	I0805 16:11:08.385384    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:08.385514    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:08.385605    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:08.385692    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:08.385761    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:08.385881    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:11:08.386024    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:11:08.386032    4013 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:11:08.446465    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722899468.587788631
	
	I0805 16:11:08.446479    4013 fix.go:216] guest clock: 1722899468.587788631
	I0805 16:11:08.446484    4013 fix.go:229] Guest: 2024-08-05 16:11:08.587788631 -0700 PDT Remote: 2024-08-05 16:11:08.385373 -0700 PDT m=+152.742754663 (delta=202.415631ms)
	I0805 16:11:08.446495    4013 fix.go:200] guest clock delta is within tolerance: 202.415631ms
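Note: fix.go reads the guest clock with `date +%s.%N` and compares it against the host's wall clock (delta here: ~202ms, inside tolerance). A small sketch of that parse-and-compare; the tolerance value is an assumption for illustration, not minikube's configured threshold.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (e.g. "1722899468.587788631")
// into a time.Time. Illustrative only; not minikube's actual parser.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1722899468.587788631")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // assumed threshold, for illustration
	fmt.Printf("delta=%v within=%v\n", delta, delta < tolerance && delta > -tolerance)
}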
	I0805 16:11:08.446499    4013 start.go:83] releasing machines lock for "ha-968000-m04", held for 37.375207026s
	I0805 16:11:08.446517    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:08.446647    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetIP
	I0805 16:11:08.469183    4013 out.go:177] * Found network options:
	I0805 16:11:08.489020    4013 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0805 16:11:08.509956    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 16:11:08.509981    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 16:11:08.509995    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:11:08.510012    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:08.510694    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:08.510902    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:08.510988    4013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:11:08.511021    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	W0805 16:11:08.511083    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 16:11:08.511098    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 16:11:08.511109    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:11:08.511171    4013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:11:08.511183    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:08.511199    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:08.511320    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:08.511356    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:08.511475    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:08.511503    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:08.511579    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/id_rsa Username:docker}
	I0805 16:11:08.511613    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:08.511730    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/id_rsa Username:docker}
	W0805 16:11:08.544454    4013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:11:08.544519    4013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:11:08.559248    4013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
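Note: the find invocation above sidelines any bridge/podman CNI configs by renaming them to *.mk_disabled, so the CNI config minikube manages takes precedence. Roughly the same operation as a Go sketch:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// Rename bridge/podman CNI configs in /etc/cni/net.d to *.mk_disabled,
// mirroring the `find ... -exec mv {} {}.mk_disabled` step in the log.
func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
}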
	I0805 16:11:08.559269    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:11:08.559342    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:11:08.597200    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:11:08.605403    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:11:08.613387    4013 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:11:08.613447    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:11:08.621571    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:11:08.629943    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:11:08.638060    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:11:08.646402    4013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:11:08.654807    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:11:08.662991    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:11:08.671582    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:11:08.680942    4013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:11:08.688339    4013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:11:08.695737    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:11:08.798441    4013 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:11:08.816137    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:11:08.816215    4013 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:11:08.835716    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:11:08.847518    4013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:11:08.867990    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:11:08.879695    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:11:08.890752    4013 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:11:08.914456    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:11:08.925541    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:11:08.941237    4013 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:11:08.944245    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:11:08.952235    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:11:08.965768    4013 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:11:09.067675    4013 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:11:09.170165    4013 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:11:09.170197    4013 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
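Note: the 130-byte /etc/docker/daemon.json pushed here is what pins Docker to the "cgroupfs" driver chosen above. A plausible shape of that file, generated in Go; the exact fields minikube writes are an assumption, only the exec-opts cgroup-driver setting is implied by the log.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical daemon.json payload; real content may carry more fields.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}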
	I0805 16:11:09.184139    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:11:09.281548    4013 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:12:10.328097    4013 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.046493334s)
	I0805 16:12:10.328204    4013 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0805 16:12:10.365222    4013 out.go:177] 
	W0805 16:12:10.386312    4013 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 05 23:11:06 ha-968000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:11:06 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:06.389189042Z" level=info msg="Starting up"
	Aug 05 23:11:06 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:06.389663926Z" level=info msg="containerd not running, starting managed containerd"
	Aug 05 23:11:06 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:06.390143336Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=518
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.408369770Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423348772Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423404929Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423454269Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423464665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423632943Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423651369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423774064Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423808885Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423821728Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423829007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423935968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.424118672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.425786619Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.425825910Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.425936027Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.425969728Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.426078806Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.426121396Z" level=info msg="metadata content store policy set" policy=shared
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427587891Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427669563Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427705862Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427719084Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427779644Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427908991Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428136864Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428235911Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428270099Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428282071Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428290976Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428299125Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428313845Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428325716Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428339937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428355366Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428366031Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428374178Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428386784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428406973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428418331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428429739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428438142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428446212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428453990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428461755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428469955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428479423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428486756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428506619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428545500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428559198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428573033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428581795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428589599Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428635221Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428670612Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428680617Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428689626Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428696156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428800505Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428839684Z" level=info msg="NRI interface is disabled by configuration."
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.429026394Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.429145595Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.429201340Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.429234250Z" level=info msg="containerd successfully booted in 0.021734s"
	Aug 05 23:11:07 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:07.407781552Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 05 23:11:07 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:07.418738721Z" level=info msg="Loading containers: start."
	Aug 05 23:11:07 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:07.516865232Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 05 23:11:07 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:07.582390999Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 05 23:11:08 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:08.356499605Z" level=info msg="Loading containers: done."
	Aug 05 23:11:08 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:08.366049745Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 05 23:11:08 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:08.366234171Z" level=info msg="Daemon has completed initialization"
	Aug 05 23:11:08 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:08.390065153Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 05 23:11:08 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:08.390220880Z" level=info msg="API listen on [::]:2376"
	Aug 05 23:11:08 ha-968000-m04 systemd[1]: Started Docker Application Container Engine.
	Aug 05 23:11:09 ha-968000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Aug 05 23:11:09 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:09.434256146Z" level=info msg="Processing signal 'terminated'"
	Aug 05 23:11:09 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:09.435568971Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 05 23:11:09 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:09.435927759Z" level=info msg="Daemon shutdown complete"
	Aug 05 23:11:09 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:09.436029566Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 05 23:11:09 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:09.436215589Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 05 23:11:10 ha-968000-m04 systemd[1]: docker.service: Deactivated successfully.
	Aug 05 23:11:10 ha-968000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Aug 05 23:11:10 ha-968000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:11:10 ha-968000-m04 dockerd[1111]: time="2024-08-05T23:11:10.480077702Z" level=info msg="Starting up"
	Aug 05 23:12:10 ha-968000-m04 dockerd[1111]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 05 23:12:10 ha-968000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 05 23:12:10 ha-968000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 05 23:12:10 ha-968000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
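Note: the decisive entry in the journal above is dockerd's second start at 23:11:10 timing out while dialing /run/containerd/containerd.sock ("context deadline exceeded"), which leaves docker.service failed and triggers the RUNTIME_ENABLE exit. A quick diagnostic probe for that symptom, sketched with an assumed 5s deadline; it would need to run on the guest, not the host.

package main

import (
	"fmt"
	"net"
	"time"
)

// Probe whether anything accepts a connection on containerd's socket
// within a deadline, mimicking what dockerd's startup dial ran into.
func main() {
	conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", 5*time.Second)
	if err != nil {
		fmt.Println("containerd socket not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("containerd socket accepted a connection")
}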
	W0805 16:12:10.386388    4013 out.go:239] * 
	W0805 16:12:10.387046    4013 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:12:10.449396    4013 out.go:177] 
	
	
	==> Docker <==
	Aug 05 23:09:56 ha-968000 dockerd[1146]: time="2024-08-05T23:09:56.374377051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:09:57 ha-968000 dockerd[1146]: time="2024-08-05T23:09:57.374383643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:09:57 ha-968000 dockerd[1146]: time="2024-08-05T23:09:57.374505237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:09:57 ha-968000 dockerd[1146]: time="2024-08-05T23:09:57.374519049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:09:57 ha-968000 dockerd[1146]: time="2024-08-05T23:09:57.374719774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:09:59 ha-968000 dockerd[1146]: time="2024-08-05T23:09:59.344050167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:09:59 ha-968000 dockerd[1146]: time="2024-08-05T23:09:59.344118579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:09:59 ha-968000 dockerd[1146]: time="2024-08-05T23:09:59.344128623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:09:59 ha-968000 dockerd[1146]: time="2024-08-05T23:09:59.344477096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:10:00 ha-968000 dockerd[1146]: time="2024-08-05T23:10:00.366625069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:10:00 ha-968000 dockerd[1146]: time="2024-08-05T23:10:00.366693392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:10:00 ha-968000 dockerd[1146]: time="2024-08-05T23:10:00.366706812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:10:00 ha-968000 dockerd[1146]: time="2024-08-05T23:10:00.366787584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:10:22 ha-968000 dockerd[1146]: time="2024-08-05T23:10:22.371842451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:10:22 ha-968000 dockerd[1146]: time="2024-08-05T23:10:22.371961703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:10:22 ha-968000 dockerd[1146]: time="2024-08-05T23:10:22.371975627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:10:22 ha-968000 dockerd[1146]: time="2024-08-05T23:10:22.372138790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:10:26 ha-968000 dockerd[1140]: time="2024-08-05T23:10:26.510842611Z" level=info msg="ignoring event" container=cfccdb420519d323e32884587cbb2325493555960556f383b6b5243f23bf5672 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 05 23:10:26 ha-968000 dockerd[1146]: time="2024-08-05T23:10:26.511299602Z" level=info msg="shim disconnected" id=cfccdb420519d323e32884587cbb2325493555960556f383b6b5243f23bf5672 namespace=moby
	Aug 05 23:10:26 ha-968000 dockerd[1146]: time="2024-08-05T23:10:26.511337640Z" level=warning msg="cleaning up after shim disconnected" id=cfccdb420519d323e32884587cbb2325493555960556f383b6b5243f23bf5672 namespace=moby
	Aug 05 23:10:26 ha-968000 dockerd[1146]: time="2024-08-05T23:10:26.511345722Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 05 23:11:48 ha-968000 dockerd[1146]: time="2024-08-05T23:11:48.356819227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:11:48 ha-968000 dockerd[1146]: time="2024-08-05T23:11:48.357279209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:11:48 ha-968000 dockerd[1146]: time="2024-08-05T23:11:48.357395319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:11:48 ha-968000 dockerd[1146]: time="2024-08-05T23:11:48.357615482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	355fa38aecae1       6e38f40d628db                                                                                         23 seconds ago       Running             storage-provisioner       2                   1dbcc850389f8       storage-provisioner
	577258077df9f       cbb01a7bd410d                                                                                         About a minute ago   Running             coredns                   1                   391b901a0529c       coredns-7db6d8ff4d-mfzln
	63f8a4c2092da       cbb01a7bd410d                                                                                         2 minutes ago        Running             coredns                   1                   c850e00017450       coredns-7db6d8ff4d-hjp5z
	0193799bafd1a       917d7814b9b5b                                                                                         2 minutes ago        Running             kindnet-cni               1                   9dba72250058d       kindnet-qh6l6
	d72783d2d1ffb       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   32be004c80f6e       busybox-fc5497c4f-pxn97
	3a4ca38aa00af       55bb025d2cfa5                                                                                         2 minutes ago        Running             kube-proxy                1                   588ec8f41833a       kube-proxy-v87jb
	cfccdb420519d       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   1dbcc850389f8       storage-provisioner
	5279a75fe7753       3861cfcd7c04c                                                                                         3 minutes ago        Running             etcd                      1                   5b34813274f1c       etcd-ha-968000
	513af177e332b       38af8ddebf499                                                                                         3 minutes ago        Running             kube-vip                  0                   ee4d5a2e10c9e       kube-vip-ha-968000
	b60d19a548167       1f6d574d502f3                                                                                         3 minutes ago        Running             kube-apiserver            1                   cf530a36471fd       kube-apiserver-ha-968000
	24b87a0c98dcc       76932a3b37d7e                                                                                         3 minutes ago        Running             kube-controller-manager   1                   9bb601d425aab       kube-controller-manager-ha-968000
	d830712616b7f       3edc18e7b7672                                                                                         3 minutes ago        Running             kube-scheduler            1                   8f8294dee2372       kube-scheduler-ha-968000
	cb7475c28d1f7       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   5 minutes ago        Exited              busybox                   0                   9732a9146dd0b       busybox-fc5497c4f-pxn97
	718ace635ea06       cbb01a7bd410d                                                                                         8 minutes ago        Exited              coredns                   0                   500832bd7de13       coredns-7db6d8ff4d-hjp5z
	08f1d5be6bd28       cbb01a7bd410d                                                                                         8 minutes ago        Exited              coredns                   0                   9fe7dedb16964       coredns-7db6d8ff4d-mfzln
	0eff729c401d3       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              8 minutes ago        Exited              kindnet-cni               0                   0675dd00ddb4e       kindnet-qh6l6
	236ffa329c7b4       55bb025d2cfa5                                                                                         8 minutes ago        Exited              kube-proxy                0                   20695b590fecf       kube-proxy-v87jb
	7aac4c03a731c       1f6d574d502f3                                                                                         9 minutes ago        Exited              kube-apiserver            0                   2cfee92cb7572       kube-apiserver-ha-968000
	66678698a7a8c       3edc18e7b7672                                                                                         9 minutes ago        Exited              kube-scheduler            0                   e8d1b1861c6fd       kube-scheduler-ha-968000
	17f0dc9ba8def       3861cfcd7c04c                                                                                         9 minutes ago        Exited              etcd                      0                   77ae5c7a9a48a       etcd-ha-968000
	794441de3f195       76932a3b37d7e                                                                                         9 minutes ago        Exited              kube-controller-manager   0                   bd03fad51648f       kube-controller-manager-ha-968000
	
	
	==> coredns [08f1d5be6bd2] <==
	[INFO] 10.244.2.2:43657 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000409083s
	[INFO] 10.244.1.2:55696 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00015406s
	[INFO] 10.244.1.2:41053 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009559855s
	[INFO] 10.244.1.2:39691 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006956s
	[INFO] 10.244.1.2:59893 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008504s
	[INFO] 10.244.0.4:33214 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082987s
	[INFO] 10.244.0.4:53796 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097087s
	[INFO] 10.244.0.4:47821 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082377s
	[INFO] 10.244.0.4:55897 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000029356s
	[INFO] 10.244.2.2:49761 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081825s
	[INFO] 10.244.2.2:58164 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106492s
	[INFO] 10.244.1.2:55164 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087227s
	[INFO] 10.244.1.2:47300 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047931s
	[INFO] 10.244.0.4:37289 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080578s
	[INFO] 10.244.2.2:42229 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100216s
	[INFO] 10.244.2.2:56584 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000066152s
	[INFO] 10.244.2.2:33160 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064701s
	[INFO] 10.244.2.2:52725 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010518s
	[INFO] 10.244.0.4:36176 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000096237s
	[INFO] 10.244.0.4:33211 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000082639s
	[INFO] 10.244.2.2:38034 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097661s
	[INFO] 10.244.2.2:57513 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108796s
	[INFO] 10.244.2.2:33013 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000036818s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [577258077df9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43827 - 29901 "HINFO IN 4580923541750251985.7631092243009977165. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011091367s
	
	
	==> coredns [63f8a4c2092d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53228 - 19573 "HINFO IN 3833116979176979481.4354200100168845612. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011686072s
	
	
	==> coredns [718ace635ea0] <==
	[INFO] 10.244.1.2:52400 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099778s
	[INFO] 10.244.0.4:35456 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000059225s
	[INFO] 10.244.0.4:34314 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107945s
	[INFO] 10.244.0.4:54779 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106466s
	[INFO] 10.244.0.4:58919 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00067383s
	[INFO] 10.244.2.2:54419 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090016s
	[INFO] 10.244.2.2:54439 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000073949s
	[INFO] 10.244.2.2:46501 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000041344s
	[INFO] 10.244.2.2:41755 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069101s
	[INFO] 10.244.2.2:51313 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132647s
	[INFO] 10.244.2.2:37540 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073728s
	[INFO] 10.244.1.2:59563 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125503s
	[INFO] 10.244.1.2:47682 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070898s
	[INFO] 10.244.0.4:41592 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088839s
	[INFO] 10.244.0.4:54512 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059642s
	[INFO] 10.244.0.4:57130 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080875s
	[INFO] 10.244.1.2:51262 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104244s
	[INFO] 10.244.1.2:34748 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125796s
	[INFO] 10.244.1.2:40451 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119057s
	[INFO] 10.244.1.2:37514 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000090659s
	[INFO] 10.244.0.4:41185 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009175s
	[INFO] 10.244.0.4:34639 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100906s
	[INFO] 10.244.2.2:55855 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000088544s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-968000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-968000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-968000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T16_03_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:03:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-968000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:12:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:09:32 +0000   Mon, 05 Aug 2024 23:03:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:09:32 +0000   Mon, 05 Aug 2024 23:03:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:09:32 +0000   Mon, 05 Aug 2024 23:03:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:09:32 +0000   Mon, 05 Aug 2024 23:03:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-968000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 3d395b8f5a5645a29e265d49d4358791
	  System UUID:                a9f34e4f-0000-0000-b87b-350754bafb6d
	  Boot ID:                    d8c06632-4a4d-43d2-a7c9-eaf87fc4ce97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pxn97              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-7db6d8ff4d-hjp5z             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m48s
	  kube-system                 coredns-7db6d8ff4d-mfzln             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m49s
	  kube-system                 etcd-ha-968000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m2s
	  kube-system                 kindnet-qh6l6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m49s
	  kube-system                 kube-apiserver-ha-968000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m2s
	  kube-system                 kube-controller-manager-ha-968000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m2s
	  kube-system                 kube-proxy-v87jb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m49s
	  kube-system                 kube-scheduler-ha-968000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m2s
	  kube-system                 kube-vip-ha-968000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m48s                  kube-proxy       
	  Normal  Starting                 2m16s                  kube-proxy       
	  Normal  Starting                 9m2s                   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m2s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m2s                   kubelet          Node ha-968000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m2s                   kubelet          Node ha-968000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m2s                   kubelet          Node ha-968000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           8m50s                  node-controller  Node ha-968000 event: Registered Node ha-968000 in Controller
	  Normal  NodeReady                8m30s                  kubelet          Node ha-968000 status is now: NodeReady
	  Normal  RegisteredNode           7m31s                  node-controller  Node ha-968000 event: Registered Node ha-968000 in Controller
	  Normal  RegisteredNode           6m14s                  node-controller  Node ha-968000 event: Registered Node ha-968000 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-968000 event: Registered Node ha-968000 in Controller
	  Normal  NodeHasSufficientMemory  3m18s (x8 over 3m18s)  kubelet          Node ha-968000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    3m18s (x8 over 3m18s)  kubelet          Node ha-968000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m18s (x7 over 3m18s)  kubelet          Node ha-968000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m40s                  node-controller  Node ha-968000 event: Registered Node ha-968000 in Controller
	  Normal  RegisteredNode           2m37s                  node-controller  Node ha-968000 event: Registered Node ha-968000 in Controller
	  Normal  RegisteredNode           105s                   node-controller  Node ha-968000 event: Registered Node ha-968000 in Controller
	
	
	Name:               ha-968000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-968000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-968000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T16_04_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:04:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-968000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:12:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:09:27 +0000   Mon, 05 Aug 2024 23:04:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:09:27 +0000   Mon, 05 Aug 2024 23:04:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:09:27 +0000   Mon, 05 Aug 2024 23:04:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:09:27 +0000   Mon, 05 Aug 2024 23:09:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-968000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d4b057c6b4e48f692755b6cf841ad9c
	  System UUID:                fe2b4f71-0000-0000-b597-390ca402ab71
	  Boot ID:                    7c73bd0f-a9d0-4153-aeb2-c06b5b51ba84
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-k62jp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 etcd-ha-968000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m46s
	  kube-system                 kindnet-fp5ns                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m48s
	  kube-system                 kube-apiserver-ha-968000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m47s
	  kube-system                 kube-controller-manager-ha-968000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m47s
	  kube-system                 kube-proxy-fvd5q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m48s
	  kube-system                 kube-scheduler-ha-968000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m47s
	  kube-system                 kube-vip-ha-968000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m43s                  kube-proxy       
	  Normal   Starting                 4m22s                  kube-proxy       
	  Normal   Starting                 7m44s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  7m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  7m48s (x8 over 7m48s)  kubelet          Node ha-968000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m48s (x8 over 7m48s)  kubelet          Node ha-968000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m48s (x7 over 7m48s)  kubelet          Node ha-968000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m45s                  node-controller  Node ha-968000-m02 event: Registered Node ha-968000-m02 in Controller
	  Normal   RegisteredNode           7m31s                  node-controller  Node ha-968000-m02 event: Registered Node ha-968000-m02 in Controller
	  Normal   RegisteredNode           6m14s                  node-controller  Node ha-968000-m02 event: Registered Node ha-968000-m02 in Controller
	  Normal   Starting                 4m26s                  kubelet          Starting kubelet.
	  Warning  Rebooted                 4m26s                  kubelet          Node ha-968000-m02 has been rebooted, boot id: 7b95b6e8-f951-4164-8d86-82386ad49202
	  Normal   NodeAllocatableEnforced  4m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4m26s (x2 over 4m26s)  kubelet          Node ha-968000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m26s (x2 over 4m26s)  kubelet          Node ha-968000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m26s (x2 over 4m26s)  kubelet          Node ha-968000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m9s                   node-controller  Node ha-968000-m02 event: Registered Node ha-968000-m02 in Controller
	  Normal   Starting                 2m59s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m59s (x8 over 2m59s)  kubelet          Node ha-968000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m59s (x8 over 2m59s)  kubelet          Node ha-968000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m59s (x7 over 2m59s)  kubelet          Node ha-968000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           2m40s                  node-controller  Node ha-968000-m02 event: Registered Node ha-968000-m02 in Controller
	  Normal   RegisteredNode           2m37s                  node-controller  Node ha-968000-m02 event: Registered Node ha-968000-m02 in Controller
	  Normal   RegisteredNode           105s                   node-controller  Node ha-968000-m02 event: Registered Node ha-968000-m02 in Controller
	
	
	Name:               ha-968000-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-968000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-968000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T16_05_43_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:05:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-968000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:12:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:10:10 +0000   Mon, 05 Aug 2024 23:05:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:10:10 +0000   Mon, 05 Aug 2024 23:05:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:10:10 +0000   Mon, 05 Aug 2024 23:05:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:10:10 +0000   Mon, 05 Aug 2024 23:06:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.7
	  Hostname:    ha-968000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 2b9548005db94e2d9f36bc5ce968d504
	  System UUID:                2e5b4039-0000-0000-8bdc-5eded2ad114e
	  Boot ID:                    9f5076e5-8b5b-44d1-987a-bed21ddf5982
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-rmn5x                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 etcd-ha-968000-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m29s
	  kube-system                 kindnet-cglm9                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m31s
	  kube-system                 kube-apiserver-ha-968000-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-controller-manager-ha-968000-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-proxy-p4xgk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-scheduler-ha-968000-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-vip-ha-968000-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 118s                   kube-proxy       
	  Normal   Starting                 6m28s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  6m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           6m31s                  node-controller  Node ha-968000-m03 event: Registered Node ha-968000-m03 in Controller
	  Normal   NodeHasSufficientMemory  6m31s (x8 over 6m31s)  kubelet          Node ha-968000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m31s (x8 over 6m31s)  kubelet          Node ha-968000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m31s (x7 over 6m31s)  kubelet          Node ha-968000-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m30s                  node-controller  Node ha-968000-m03 event: Registered Node ha-968000-m03 in Controller
	  Normal   RegisteredNode           6m14s                  node-controller  Node ha-968000-m03 event: Registered Node ha-968000-m03 in Controller
	  Normal   RegisteredNode           4m9s                   node-controller  Node ha-968000-m03 event: Registered Node ha-968000-m03 in Controller
	  Normal   RegisteredNode           2m40s                  node-controller  Node ha-968000-m03 event: Registered Node ha-968000-m03 in Controller
	  Normal   RegisteredNode           2m37s                  node-controller  Node ha-968000-m03 event: Registered Node ha-968000-m03 in Controller
	  Normal   Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m3s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m2s                   kubelet          Node ha-968000-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m2s                   kubelet          Node ha-968000-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m2s                   kubelet          Node ha-968000-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m2s                   kubelet          Node ha-968000-m03 has been rebooted, boot id: 9f5076e5-8b5b-44d1-987a-bed21ddf5982
	  Normal   RegisteredNode           105s                   node-controller  Node ha-968000-m03 event: Registered Node ha-968000-m03 in Controller
	
	
	Name:               ha-968000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-968000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-968000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T16_06_39_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:06:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-968000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:07:59 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 05 Aug 2024 23:07:09 +0000   Mon, 05 Aug 2024 23:10:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 05 Aug 2024 23:07:09 +0000   Mon, 05 Aug 2024 23:10:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 05 Aug 2024 23:07:09 +0000   Mon, 05 Aug 2024 23:10:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 05 Aug 2024 23:07:09 +0000   Mon, 05 Aug 2024 23:10:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-968000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 387692c0920442adbb5b3caabbb94471
	  System UUID:                a18c4311-0000-0000-88be-5c31f452a5bc
	  Boot ID:                    7f2467e2-e07f-4b0a-8fd3-3fe64bcdd2ab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5dshm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m34s
	  kube-system                 kube-proxy-qptt6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m26s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    5m34s (x2 over 5m34s)  kubelet          Node ha-968000-m04 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           5m34s                  node-controller  Node ha-968000-m04 event: Registered Node ha-968000-m04 in Controller
	  Normal  NodeAllocatableEnforced  5m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m34s (x2 over 5m34s)  kubelet          Node ha-968000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m34s (x2 over 5m34s)  kubelet          Node ha-968000-m04 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           5m31s                  node-controller  Node ha-968000-m04 event: Registered Node ha-968000-m04 in Controller
	  Normal  RegisteredNode           5m30s                  node-controller  Node ha-968000-m04 event: Registered Node ha-968000-m04 in Controller
	  Normal  NodeReady                5m11s                  kubelet          Node ha-968000-m04 status is now: NodeReady
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-968000-m04 event: Registered Node ha-968000-m04 in Controller
	  Normal  RegisteredNode           2m40s                  node-controller  Node ha-968000-m04 event: Registered Node ha-968000-m04 in Controller
	  Normal  RegisteredNode           2m37s                  node-controller  Node ha-968000-m04 event: Registered Node ha-968000-m04 in Controller
	  Normal  NodeNotReady             119s                   node-controller  Node ha-968000-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           105s                   node-controller  Node ha-968000-m04 event: Registered Node ha-968000-m04 in Controller
	
	
	==> dmesg <==
	[  +0.035875] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.008042] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.687990] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007066] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.637880] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +1.424408] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.527207] systemd-fstab-generator[472]: Ignoring "noauto" option for root device
	[  +0.101400] systemd-fstab-generator[484]: Ignoring "noauto" option for root device
	[  +2.009836] systemd-fstab-generator[1065]: Ignoring "noauto" option for root device
	[  +0.058762] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.193065] systemd-fstab-generator[1105]: Ignoring "noauto" option for root device
	[  +0.103523] systemd-fstab-generator[1117]: Ignoring "noauto" option for root device
	[  +0.111824] systemd-fstab-generator[1131]: Ignoring "noauto" option for root device
	[  +2.468462] systemd-fstab-generator[1353]: Ignoring "noauto" option for root device
	[  +0.106476] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.116705] systemd-fstab-generator[1377]: Ignoring "noauto" option for root device
	[  +0.119469] systemd-fstab-generator[1392]: Ignoring "noauto" option for root device
	[  +0.468214] systemd-fstab-generator[1552]: Ignoring "noauto" option for root device
	[Aug 5 23:09] kauditd_printk_skb: 234 callbacks suppressed
	[ +41.984025] kauditd_printk_skb: 40 callbacks suppressed
	[ +13.597500] kauditd_printk_skb: 20 callbacks suppressed
	[Aug 5 23:10] kauditd_printk_skb: 45 callbacks suppressed
	
	
	==> etcd [17f0dc9ba8de] <==
	2024/08/05 23:08:27 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-05T23:08:27.902192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.809078875s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-05T23:08:27.902223Z","caller":"traceutil/trace.go:171","msg":"trace[1922081133] range","detail":"{range_begin:/registry/persistentvolumeclaims/; range_end:/registry/persistentvolumeclaims0; }","duration":"1.809122007s","start":"2024-08-05T23:08:26.093097Z","end":"2024-08-05T23:08:27.902219Z","steps":["trace[1922081133] 'agreement among raft nodes before linearized reading'  (duration: 1.80908929s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T23:08:27.902238Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T23:08:26.093092Z","time spent":"1.809141726s","remote":"127.0.0.1:33368","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":0,"response size":0,"request content":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true "}
	2024/08/05 23:08:27 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-05T23:08:27.938966Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:08:27.939045Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-05T23:08:27.939085Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-05T23:08:27.939508Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c5d16a8b28740de6"}
	{"level":"info","ts":"2024-08-05T23:08:27.939521Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c5d16a8b28740de6"}
	{"level":"info","ts":"2024-08-05T23:08:27.939534Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c5d16a8b28740de6"}
	{"level":"info","ts":"2024-08-05T23:08:27.939586Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c5d16a8b28740de6"}
	{"level":"info","ts":"2024-08-05T23:08:27.939611Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c5d16a8b28740de6"}
	{"level":"info","ts":"2024-08-05T23:08:27.939649Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c5d16a8b28740de6"}
	{"level":"info","ts":"2024-08-05T23:08:27.93966Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c5d16a8b28740de6"}
	{"level":"info","ts":"2024-08-05T23:08:27.939664Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:08:27.939669Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:08:27.939682Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:08:27.939945Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:08:27.93999Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:08:27.940013Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:08:27.94002Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:08:27.943994Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-05T23:08:27.944194Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-05T23:08:27.944204Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-968000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> etcd [5279a75fe775] <==
	{"level":"warn","ts":"2024-08-05T23:09:54.434503Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"3cf0731ec44cd9cd","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:09:54.434554Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3cf0731ec44cd9cd","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:09:56.613745Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3cf0731ec44cd9cd","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:09:56.613841Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3cf0731ec44cd9cd","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:09:58.436371Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"3cf0731ec44cd9cd","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:09:58.43646Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3cf0731ec44cd9cd","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:10:01.614459Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3cf0731ec44cd9cd","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:10:01.614643Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3cf0731ec44cd9cd","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:10:02.438102Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"3cf0731ec44cd9cd","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:10:02.438538Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3cf0731ec44cd9cd","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:10:06.44078Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"3cf0731ec44cd9cd","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:10:06.441013Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3cf0731ec44cd9cd","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:10:06.615747Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3cf0731ec44cd9cd","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:10:06.615762Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3cf0731ec44cd9cd","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:10:10.442851Z","caller":"etcdserver/cluster_util.go:294","msg":"failed to reach the peer URL","address":"https://192.169.0.7:2380/version","remote-member-id":"3cf0731ec44cd9cd","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:10:10.442937Z","caller":"etcdserver/cluster_util.go:158","msg":"failed to get version","remote-member-id":"3cf0731ec44cd9cd","error":"Get \"https://192.169.0.7:2380/version\": dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:10:11.615943Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"3cf0731ec44cd9cd","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-08-05T23:10:11.615957Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"3cf0731ec44cd9cd","rtt":"0s","error":"dial tcp 192.169.0.7:2380: connect: connection refused"}
	{"level":"info","ts":"2024-08-05T23:10:11.943048Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:10:11.944056Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:10:11.946695Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:10:11.956038Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"3cf0731ec44cd9cd","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-05T23:10:11.956182Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:10:12.000882Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"3cf0731ec44cd9cd","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-05T23:10:12.001279Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3cf0731ec44cd9cd"}
	
	
	==> kernel <==
	 23:12:13 up 3 min,  0 users,  load average: 0.15, 0.13, 0.05
	Linux ha-968000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0193799bafd1] <==
	I0805 23:11:40.544766       1 main.go:322] Node ha-968000-m04 has CIDR [10.244.3.0/24] 
	I0805 23:11:50.543294       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0805 23:11:50.543389       1 main.go:322] Node ha-968000-m04 has CIDR [10.244.3.0/24] 
	I0805 23:11:50.543469       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0805 23:11:50.543559       1 main.go:299] handling current node
	I0805 23:11:50.543617       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0805 23:11:50.543662       1 main.go:322] Node ha-968000-m02 has CIDR [10.244.1.0/24] 
	I0805 23:11:50.543753       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0805 23:11:50.543809       1 main.go:322] Node ha-968000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:12:00.536321       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0805 23:12:00.536406       1 main.go:299] handling current node
	I0805 23:12:00.536431       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0805 23:12:00.536445       1 main.go:322] Node ha-968000-m02 has CIDR [10.244.1.0/24] 
	I0805 23:12:00.536565       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0805 23:12:00.536623       1 main.go:322] Node ha-968000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:12:00.536679       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0805 23:12:00.536753       1 main.go:322] Node ha-968000-m04 has CIDR [10.244.3.0/24] 
	I0805 23:12:10.543704       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0805 23:12:10.543779       1 main.go:299] handling current node
	I0805 23:12:10.543801       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0805 23:12:10.543846       1 main.go:322] Node ha-968000-m02 has CIDR [10.244.1.0/24] 
	I0805 23:12:10.543961       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0805 23:12:10.544007       1 main.go:322] Node ha-968000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:12:10.544128       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0805 23:12:10.544175       1 main.go:322] Node ha-968000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [0eff729c401d] <==
	I0805 23:07:48.343605       1 main.go:322] Node ha-968000-m04 has CIDR [10.244.3.0/24] 
	I0805 23:07:58.351602       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0805 23:07:58.351677       1 main.go:322] Node ha-968000-m04 has CIDR [10.244.3.0/24] 
	I0805 23:07:58.351762       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0805 23:07:58.351807       1 main.go:299] handling current node
	I0805 23:07:58.351827       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0805 23:07:58.351841       1 main.go:322] Node ha-968000-m02 has CIDR [10.244.1.0/24] 
	I0805 23:07:58.351891       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0805 23:07:58.351905       1 main.go:322] Node ha-968000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:08:08.348631       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0805 23:08:08.348818       1 main.go:299] handling current node
	I0805 23:08:08.348908       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0805 23:08:08.349012       1 main.go:322] Node ha-968000-m02 has CIDR [10.244.1.0/24] 
	I0805 23:08:08.349214       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0805 23:08:08.349308       1 main.go:322] Node ha-968000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:08:08.349413       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0805 23:08:08.349484       1 main.go:322] Node ha-968000-m04 has CIDR [10.244.3.0/24] 
	I0805 23:08:18.343861       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0805 23:08:18.343942       1 main.go:322] Node ha-968000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:08:18.344043       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0805 23:08:18.344162       1 main.go:322] Node ha-968000-m04 has CIDR [10.244.3.0/24] 
	I0805 23:08:18.344272       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0805 23:08:18.344339       1 main.go:299] handling current node
	I0805 23:08:18.344378       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0805 23:08:18.344489       1 main.go:322] Node ha-968000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [7aac4c03a731] <==
	W0805 23:08:28.934152       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934242       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934382       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934673       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934721       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934770       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934856       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934936       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934993       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.935038       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934249       1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.935502       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.935594       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.935678       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.935758       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934264       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934278       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.935813       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.935856       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.935972       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.936008       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.936080       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.939206       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.939367       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.942086       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b60d19a54816] <==
	I0805 23:09:21.358366       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0805 23:09:21.350043       1 controller.go:116] Starting legacy_token_tracking_controller
	I0805 23:09:21.370875       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0805 23:09:21.450095       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0805 23:09:21.450212       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 23:09:21.450382       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0805 23:09:21.450418       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0805 23:09:21.451646       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0805 23:09:21.455635       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0805 23:09:21.456070       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0805 23:09:21.456339       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0805 23:09:21.456586       1 aggregator.go:165] initial CRD sync complete...
	I0805 23:09:21.456619       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 23:09:21.456625       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 23:09:21.456632       1 cache.go:39] Caches are synced for autoregister controller
	I0805 23:09:21.469346       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0805 23:09:21.469695       1 policy_source.go:224] refreshing policies
	I0805 23:09:21.471034       1 shared_informer.go:320] Caches are synced for configmaps
	W0805 23:09:21.483470       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	I0805 23:09:21.485903       1 controller.go:615] quota admission added evaluator for: endpoints
	I0805 23:09:21.498200       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0805 23:09:21.502190       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0805 23:09:21.548021       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 23:09:22.355586       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0805 23:09:22.617731       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	
	
	==> kube-controller-manager [24b87a0c98dc] <==
	I0805 23:09:42.516020       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="53.899µs"
	I0805 23:09:42.553167       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.243331ms"
	I0805 23:09:42.553371       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="157.397µs"
	I0805 23:09:43.298079       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="40.023µs"
	I0805 23:09:44.331896       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="84.432µs"
	I0805 23:09:44.347761       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-m7pj6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-m7pj6\": the object has been modified; please apply your changes to the latest version and try again"
	I0805 23:09:44.347990       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e530d70f-5afe-4156-b878-dad9e9636f3d", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-m7pj6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-m7pj6": the object has been modified; please apply your changes to the latest version and try again
	I0805 23:09:44.391933       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.196µs"
	I0805 23:09:57.535315       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.167047ms"
	I0805 23:09:57.535386       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.188µs"
	I0805 23:10:00.575844       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.426µs"
	I0805 23:10:00.592030       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="7.881928ms"
	I0805 23:10:00.592262       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-m7pj6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-m7pj6\": the object has been modified; please apply your changes to the latest version and try again"
	I0805 23:10:00.593124       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e530d70f-5afe-4156-b878-dad9e9636f3d", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-m7pj6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-m7pj6": the object has been modified; please apply your changes to the latest version and try again
	I0805 23:10:00.593284       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111.463µs"
	I0805 23:10:08.322324       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.921µs"
	I0805 23:10:11.182398       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.283834ms"
	I0805 23:10:11.184399       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="1.874184ms"
	I0805 23:10:13.827399       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.540772ms"
	I0805 23:10:13.827613       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.193µs"
	I0805 23:10:22.774890       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.13µs"
	I0805 23:10:22.794629       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-m7pj6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-m7pj6\": the object has been modified; please apply your changes to the latest version and try again"
	I0805 23:10:22.796877       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e530d70f-5afe-4156-b878-dad9e9636f3d", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-m7pj6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-m7pj6": the object has been modified; please apply your changes to the latest version and try again
	I0805 23:10:22.814531       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.799975ms"
	I0805 23:10:22.815039       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="388.332µs"
	
	
	==> kube-controller-manager [794441de3f19] <==
	I0805 23:06:10.953009       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="140.842137ms"
	I0805 23:06:10.970316       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.256226ms"
	I0805 23:06:10.981859       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.43482ms"
	I0805 23:06:10.982161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.844µs"
	I0805 23:06:10.982675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.081µs"
	I0805 23:06:10.983001       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.395µs"
	I0805 23:06:11.011942       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.408789ms"
	I0805 23:06:11.012096       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.128µs"
	I0805 23:06:11.149412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.376µs"
	I0805 23:06:13.025618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.346296ms"
	I0805 23:06:13.025933       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.389µs"
	I0805 23:06:13.649894       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.694881ms"
	I0805 23:06:13.650164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.959µs"
	I0805 23:06:14.621712       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.914516ms"
	I0805 23:06:14.621776       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.079µs"
	E0805 23:06:38.510923       1 certificate_controller.go:146] Sync csr-tp8z2 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-tp8z2": the object has been modified; please apply your changes to the latest version and try again
	E0805 23:06:38.515110       1 certificate_controller.go:146] Sync csr-tp8z2 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-tp8z2": the object has been modified; please apply your changes to the latest version and try again
	I0805 23:06:38.609561       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-968000-m04\" does not exist"
	I0805 23:06:38.630882       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-968000-m04" podCIDRs=["10.244.3.0/24"]
	I0805 23:06:42.818552       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-968000-m04"
	I0805 23:07:01.374214       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-968000-m04"
	I0805 23:07:47.490230       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.437456ms"
	I0805 23:07:47.490444       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.713µs"
	I0805 23:07:50.255694       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.276166ms"
	I0805 23:07:50.255998       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="199.7µs"
	
	
	==> kube-proxy [236ffa329c7b] <==
	I0805 23:03:24.411171       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:03:24.417641       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0805 23:03:24.460670       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:03:24.460733       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:03:24.460747       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:03:24.463438       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:03:24.463665       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:03:24.463697       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:03:24.464664       1 config.go:192] "Starting service config controller"
	I0805 23:03:24.464691       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:03:24.464706       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:03:24.464709       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:03:24.464932       1 config.go:319] "Starting node config controller"
	I0805 23:03:24.464937       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:03:24.564862       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 23:03:24.564952       1 shared_informer.go:320] Caches are synced for node config
	I0805 23:03:24.564967       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [3a4ca38aa00a] <==
	I0805 23:09:56.622719       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:09:56.639713       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0805 23:09:56.685724       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:09:56.685766       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:09:56.685780       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:09:56.688954       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:09:56.689176       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:09:56.689205       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:09:56.691248       1 config.go:192] "Starting service config controller"
	I0805 23:09:56.691903       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:09:56.691947       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:09:56.691951       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:09:56.693386       1 config.go:319] "Starting node config controller"
	I0805 23:09:56.693414       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:09:56.792187       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 23:09:56.792473       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:09:56.793440       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [66678698a7a8] <==
	E0805 23:03:07.355865       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 23:03:07.355526       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:03:07.356013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:03:07.355515       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0805 23:03:07.356047       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0805 23:03:08.169414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 23:03:08.169482       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 23:03:08.181890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:03:08.181944       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 23:03:08.454577       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 23:03:08.454685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0805 23:03:08.749738       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0805 23:06:10.780677       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-rmn5x\": pod busybox-fc5497c4f-rmn5x is already assigned to node \"ha-968000-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-rmn5x" node="ha-968000-m03"
	E0805 23:06:10.780742       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod da945b47-6ef2-4df0-8bf2-9ae079ae2d84(default/busybox-fc5497c4f-rmn5x) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-rmn5x"
	E0805 23:06:10.780758       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-rmn5x\": pod busybox-fc5497c4f-rmn5x is already assigned to node \"ha-968000-m03\"" pod="default/busybox-fc5497c4f-rmn5x"
	I0805 23:06:10.780855       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-rmn5x" node="ha-968000-m03"
	E0805 23:06:38.649780       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-qptt6\": pod kube-proxy-qptt6 is already assigned to node \"ha-968000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-qptt6" node="ha-968000-m04"
	E0805 23:06:38.649837       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod a826a636-1d05-4cca-a56d-d25a9cf41506(kube-system/kube-proxy-qptt6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-qptt6"
	E0805 23:06:38.649849       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-qptt6\": pod kube-proxy-qptt6 is already assigned to node \"ha-968000-m04\"" pod="kube-system/kube-proxy-qptt6"
	I0805 23:06:38.649861       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-qptt6" node="ha-968000-m04"
	E0805 23:06:38.662121       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5dshm\": pod kindnet-5dshm is already assigned to node \"ha-968000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5dshm" node="ha-968000-m04"
	E0805 23:06:38.662175       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2641d2a9-a26a-4cbe-b8ea-99ed7c7af43c(kube-system/kindnet-5dshm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-5dshm"
	E0805 23:06:38.662188       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5dshm\": pod kindnet-5dshm is already assigned to node \"ha-968000-m04\"" pod="kube-system/kindnet-5dshm"
	I0805 23:06:38.662201       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5dshm" node="ha-968000-m04"
	E0805 23:08:27.797554       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d830712616b7] <==
	I0805 23:09:02.830792       1 serving.go:380] Generated self-signed cert in-memory
	W0805 23:09:13.096106       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0805 23:09:13.096131       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0805 23:09:13.096136       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0805 23:09:21.391714       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0805 23:09:21.391874       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:09:21.403353       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0805 23:09:21.403400       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 23:09:21.403774       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0805 23:09:21.403911       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 23:09:21.503960       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 23:10:22 ha-968000 kubelet[1559]: I0805 23:10:22.306938    1559 scope.go:117] "RemoveContainer" containerID="08f1d5be6bd28b75f94b7738ff81a5faf3ea26cc077f91e542745e41a27fb9b1"
	Aug 05 23:10:26 ha-968000 kubelet[1559]: I0805 23:10:26.805536    1559 scope.go:117] "RemoveContainer" containerID="9b4a6fce5b3c1066d545503e22783e35c718132d1b3257df8921cf2bf1f2bc01"
	Aug 05 23:10:26 ha-968000 kubelet[1559]: I0805 23:10:26.805819    1559 scope.go:117] "RemoveContainer" containerID="cfccdb420519d323e32884587cbb2325493555960556f383b6b5243f23bf5672"
	Aug 05 23:10:26 ha-968000 kubelet[1559]: E0805 23:10:26.805954    1559 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(52e2952a-756d-4f65-84f5-588cb6563297)\"" pod="kube-system/storage-provisioner" podUID="52e2952a-756d-4f65-84f5-588cb6563297"
	Aug 05 23:10:41 ha-968000 kubelet[1559]: I0805 23:10:41.306378    1559 scope.go:117] "RemoveContainer" containerID="cfccdb420519d323e32884587cbb2325493555960556f383b6b5243f23bf5672"
	Aug 05 23:10:41 ha-968000 kubelet[1559]: E0805 23:10:41.306800    1559 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(52e2952a-756d-4f65-84f5-588cb6563297)\"" pod="kube-system/storage-provisioner" podUID="52e2952a-756d-4f65-84f5-588cb6563297"
	Aug 05 23:10:54 ha-968000 kubelet[1559]: E0805 23:10:54.327441    1559 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:10:54 ha-968000 kubelet[1559]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:10:54 ha-968000 kubelet[1559]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:10:54 ha-968000 kubelet[1559]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:10:54 ha-968000 kubelet[1559]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:10:56 ha-968000 kubelet[1559]: I0805 23:10:56.307136    1559 scope.go:117] "RemoveContainer" containerID="cfccdb420519d323e32884587cbb2325493555960556f383b6b5243f23bf5672"
	Aug 05 23:10:56 ha-968000 kubelet[1559]: E0805 23:10:56.307284    1559 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(52e2952a-756d-4f65-84f5-588cb6563297)\"" pod="kube-system/storage-provisioner" podUID="52e2952a-756d-4f65-84f5-588cb6563297"
	Aug 05 23:11:08 ha-968000 kubelet[1559]: I0805 23:11:08.307092    1559 scope.go:117] "RemoveContainer" containerID="cfccdb420519d323e32884587cbb2325493555960556f383b6b5243f23bf5672"
	Aug 05 23:11:08 ha-968000 kubelet[1559]: E0805 23:11:08.307242    1559 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(52e2952a-756d-4f65-84f5-588cb6563297)\"" pod="kube-system/storage-provisioner" podUID="52e2952a-756d-4f65-84f5-588cb6563297"
	Aug 05 23:11:20 ha-968000 kubelet[1559]: I0805 23:11:20.305826    1559 scope.go:117] "RemoveContainer" containerID="cfccdb420519d323e32884587cbb2325493555960556f383b6b5243f23bf5672"
	Aug 05 23:11:20 ha-968000 kubelet[1559]: E0805 23:11:20.305965    1559 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(52e2952a-756d-4f65-84f5-588cb6563297)\"" pod="kube-system/storage-provisioner" podUID="52e2952a-756d-4f65-84f5-588cb6563297"
	Aug 05 23:11:34 ha-968000 kubelet[1559]: I0805 23:11:34.306692    1559 scope.go:117] "RemoveContainer" containerID="cfccdb420519d323e32884587cbb2325493555960556f383b6b5243f23bf5672"
	Aug 05 23:11:34 ha-968000 kubelet[1559]: E0805 23:11:34.309100    1559 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(52e2952a-756d-4f65-84f5-588cb6563297)\"" pod="kube-system/storage-provisioner" podUID="52e2952a-756d-4f65-84f5-588cb6563297"
	Aug 05 23:11:48 ha-968000 kubelet[1559]: I0805 23:11:48.306459    1559 scope.go:117] "RemoveContainer" containerID="cfccdb420519d323e32884587cbb2325493555960556f383b6b5243f23bf5672"
	Aug 05 23:11:54 ha-968000 kubelet[1559]: E0805 23:11:54.321829    1559 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:11:54 ha-968000 kubelet[1559]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:11:54 ha-968000 kubelet[1559]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:11:54 ha-968000 kubelet[1559]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:11:54 ha-968000 kubelet[1559]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-968000 -n ha-968000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-968000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (246.45s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.16s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-968000 node delete m03 -v=7 --alsologtostderr: (7.727138865s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-968000 status -v=7 --alsologtostderr: exit status 2 (340.859816ms)

-- stdout --
	ha-968000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-968000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-968000-m04
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0805 16:12:22.716886    4149 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:12:22.717100    4149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:12:22.717107    4149 out.go:304] Setting ErrFile to fd 2...
	I0805 16:12:22.717111    4149 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:12:22.717305    4149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:12:22.717487    4149 out.go:298] Setting JSON to false
	I0805 16:12:22.717509    4149 mustload.go:65] Loading cluster: ha-968000
	I0805 16:12:22.717548    4149 notify.go:220] Checking for updates...
	I0805 16:12:22.717827    4149 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:12:22.717844    4149 status.go:255] checking status of ha-968000 ...
	I0805 16:12:22.718299    4149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:12:22.718352    4149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:12:22.727544    4149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52049
	I0805 16:12:22.727928    4149 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:12:22.728391    4149 main.go:141] libmachine: Using API Version  1
	I0805 16:12:22.728403    4149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:12:22.728604    4149 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:12:22.728712    4149 main.go:141] libmachine: (ha-968000) Calling .GetState
	I0805 16:12:22.728816    4149 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:12:22.728921    4149 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid from json: 4025
	I0805 16:12:22.729911    4149 status.go:330] ha-968000 host status = "Running" (err=<nil>)
	I0805 16:12:22.729928    4149 host.go:66] Checking if "ha-968000" exists ...
	I0805 16:12:22.730172    4149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:12:22.730193    4149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:12:22.738664    4149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52051
	I0805 16:12:22.738995    4149 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:12:22.739360    4149 main.go:141] libmachine: Using API Version  1
	I0805 16:12:22.739375    4149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:12:22.739598    4149 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:12:22.739707    4149 main.go:141] libmachine: (ha-968000) Calling .GetIP
	I0805 16:12:22.739791    4149 host.go:66] Checking if "ha-968000" exists ...
	I0805 16:12:22.740039    4149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:12:22.740061    4149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:12:22.748908    4149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52053
	I0805 16:12:22.749250    4149 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:12:22.749556    4149 main.go:141] libmachine: Using API Version  1
	I0805 16:12:22.749565    4149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:12:22.749751    4149 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:12:22.749860    4149 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:12:22.749991    4149 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:12:22.750011    4149 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:12:22.750114    4149 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:12:22.750189    4149 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:12:22.750272    4149 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:12:22.750346    4149 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:12:22.787231    4149 ssh_runner.go:195] Run: systemctl --version
	I0805 16:12:22.791538    4149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:12:22.803712    4149 kubeconfig.go:125] found "ha-968000" server: "https://192.169.0.254:8443"
	I0805 16:12:22.803737    4149 api_server.go:166] Checking apiserver status ...
	I0805 16:12:22.803778    4149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:12:22.815967    4149 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2033/cgroup
	W0805 16:12:22.824036    4149 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2033/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:12:22.824083    4149 ssh_runner.go:195] Run: ls
	I0805 16:12:22.827280    4149 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0805 16:12:22.831308    4149 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0805 16:12:22.831323    4149 status.go:422] ha-968000 apiserver status = Running (err=<nil>)
	I0805 16:12:22.831332    4149 status.go:257] ha-968000 status: &{Name:ha-968000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:12:22.831352    4149 status.go:255] checking status of ha-968000-m02 ...
	I0805 16:12:22.831621    4149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:12:22.831647    4149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:12:22.840176    4149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52057
	I0805 16:12:22.840498    4149 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:12:22.840867    4149 main.go:141] libmachine: Using API Version  1
	I0805 16:12:22.840881    4149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:12:22.841066    4149 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:12:22.841163    4149 main.go:141] libmachine: (ha-968000-m02) Calling .GetState
	I0805 16:12:22.841247    4149 main.go:141] libmachine: (ha-968000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:12:22.841325    4149 main.go:141] libmachine: (ha-968000-m02) DBG | hyperkit pid from json: 4036
	I0805 16:12:22.842291    4149 status.go:330] ha-968000-m02 host status = "Running" (err=<nil>)
	I0805 16:12:22.842299    4149 host.go:66] Checking if "ha-968000-m02" exists ...
	I0805 16:12:22.842542    4149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:12:22.842564    4149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:12:22.851037    4149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52059
	I0805 16:12:22.851356    4149 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:12:22.851710    4149 main.go:141] libmachine: Using API Version  1
	I0805 16:12:22.851727    4149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:12:22.851926    4149 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:12:22.852032    4149 main.go:141] libmachine: (ha-968000-m02) Calling .GetIP
	I0805 16:12:22.852126    4149 host.go:66] Checking if "ha-968000-m02" exists ...
	I0805 16:12:22.852412    4149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:12:22.852437    4149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:12:22.860978    4149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52061
	I0805 16:12:22.861322    4149 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:12:22.861635    4149 main.go:141] libmachine: Using API Version  1
	I0805 16:12:22.861644    4149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:12:22.861856    4149 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:12:22.861961    4149 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:12:22.862089    4149 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:12:22.862100    4149 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:12:22.862176    4149 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:12:22.862256    4149 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:12:22.862358    4149 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:12:22.862437    4149 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/id_rsa Username:docker}
	I0805 16:12:22.891743    4149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:12:22.902310    4149 kubeconfig.go:125] found "ha-968000" server: "https://192.169.0.254:8443"
	I0805 16:12:22.902324    4149 api_server.go:166] Checking apiserver status ...
	I0805 16:12:22.902364    4149 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:12:22.913740    4149 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2153/cgroup
	W0805 16:12:22.921073    4149 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2153/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:12:22.921119    4149 ssh_runner.go:195] Run: ls
	I0805 16:12:22.924243    4149 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0805 16:12:22.927527    4149 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0805 16:12:22.927540    4149 status.go:422] ha-968000-m02 apiserver status = Running (err=<nil>)
	I0805 16:12:22.927548    4149 status.go:257] ha-968000-m02 status: &{Name:ha-968000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:12:22.927566    4149 status.go:255] checking status of ha-968000-m04 ...
	I0805 16:12:22.927822    4149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:12:22.927842    4149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:12:22.936491    4149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52065
	I0805 16:12:22.936838    4149 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:12:22.937177    4149 main.go:141] libmachine: Using API Version  1
	I0805 16:12:22.937191    4149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:12:22.937408    4149 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:12:22.937520    4149 main.go:141] libmachine: (ha-968000-m04) Calling .GetState
	I0805 16:12:22.937606    4149 main.go:141] libmachine: (ha-968000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:12:22.937695    4149 main.go:141] libmachine: (ha-968000-m04) DBG | hyperkit pid from json: 4076
	I0805 16:12:22.938680    4149 status.go:330] ha-968000-m04 host status = "Running" (err=<nil>)
	I0805 16:12:22.938690    4149 host.go:66] Checking if "ha-968000-m04" exists ...
	I0805 16:12:22.939059    4149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:12:22.939093    4149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:12:22.947547    4149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52067
	I0805 16:12:22.947883    4149 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:12:22.948214    4149 main.go:141] libmachine: Using API Version  1
	I0805 16:12:22.948226    4149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:12:22.948444    4149 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:12:22.948552    4149 main.go:141] libmachine: (ha-968000-m04) Calling .GetIP
	I0805 16:12:22.948634    4149 host.go:66] Checking if "ha-968000-m04" exists ...
	I0805 16:12:22.948887    4149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:12:22.948920    4149 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:12:22.957553    4149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52069
	I0805 16:12:22.957899    4149 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:12:22.958236    4149 main.go:141] libmachine: Using API Version  1
	I0805 16:12:22.958247    4149 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:12:22.958470    4149 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:12:22.958592    4149 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:12:22.958727    4149 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:12:22.958740    4149 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:12:22.958832    4149 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:12:22.958949    4149 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:12:22.959056    4149 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:12:22.959158    4149 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/id_rsa Username:docker}
	I0805 16:12:22.992551    4149 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:12:23.003111    4149 status.go:257] ha-968000-m04 status: &{Name:ha-968000-m04 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-968000 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-968000 -n ha-968000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-968000 logs -n 25: (3.412591836s)
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-968000 ssh -n                                                                                                             | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n ha-968000-m02 sudo cat                                                                                      | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | /home/docker/cp-test_ha-968000-m03_ha-968000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-968000 cp ha-968000-m03:/home/docker/cp-test.txt                                                                          | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m04:/home/docker/cp-test_ha-968000-m03_ha-968000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n                                                                                                             | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n ha-968000-m04 sudo cat                                                                                      | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | /home/docker/cp-test_ha-968000-m03_ha-968000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-968000 cp testdata/cp-test.txt                                                                                            | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n                                                                                                             | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-968000 cp ha-968000-m04:/home/docker/cp-test.txt                                                                          | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1635686668/001/cp-test_ha-968000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n                                                                                                             | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-968000 cp ha-968000-m04:/home/docker/cp-test.txt                                                                          | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000:/home/docker/cp-test_ha-968000-m04_ha-968000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n                                                                                                             | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n ha-968000 sudo cat                                                                                          | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | /home/docker/cp-test_ha-968000-m04_ha-968000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-968000 cp ha-968000-m04:/home/docker/cp-test.txt                                                                          | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m02:/home/docker/cp-test_ha-968000-m04_ha-968000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n                                                                                                             | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n ha-968000-m02 sudo cat                                                                                      | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | /home/docker/cp-test_ha-968000-m04_ha-968000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-968000 cp ha-968000-m04:/home/docker/cp-test.txt                                                                          | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m03:/home/docker/cp-test_ha-968000-m04_ha-968000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n                                                                                                             | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | ha-968000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-968000 ssh -n ha-968000-m03 sudo cat                                                                                      | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | /home/docker/cp-test_ha-968000-m04_ha-968000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-968000 node stop m02 -v=7                                                                                                 | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:07 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-968000 node start m02 -v=7                                                                                                | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:07 PDT | 05 Aug 24 16:08 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-968000 -v=7                                                                                                       | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:08 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-968000 -v=7                                                                                                            | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:08 PDT | 05 Aug 24 16:08 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-968000 --wait=true -v=7                                                                                                | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:08 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-968000                                                                                                            | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:12 PDT |                     |
	| node    | ha-968000 node delete m03 -v=7                                                                                               | ha-968000 | jenkins | v1.33.1 | 05 Aug 24 16:12 PDT | 05 Aug 24 16:12 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 16:08:35
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 16:08:35.679541    4013 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:08:35.680318    4013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:08:35.680328    4013 out.go:304] Setting ErrFile to fd 2...
	I0805 16:08:35.680346    4013 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:08:35.680972    4013 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:08:35.682707    4013 out.go:298] Setting JSON to false
	I0805 16:08:35.706964    4013 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2286,"bootTime":1722897029,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:08:35.707087    4013 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:08:35.728606    4013 out.go:177] * [ha-968000] minikube v1.33.1 on Darwin 14.5
	I0805 16:08:35.770605    4013 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:08:35.770660    4013 notify.go:220] Checking for updates...
	I0805 16:08:35.813604    4013 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:08:35.834532    4013 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:08:35.855464    4013 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:08:35.876389    4013 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:08:35.897688    4013 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:08:35.919248    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:08:35.919436    4013 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:08:35.920085    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:08:35.920151    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:08:35.929520    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51884
	I0805 16:08:35.929878    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:08:35.930279    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:08:35.930302    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:08:35.930554    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:08:35.930686    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:35.959618    4013 out.go:177] * Using the hyperkit driver based on existing profile
	I0805 16:08:36.001252    4013 start.go:297] selected driver: hyperkit
	I0805 16:08:36.001281    4013 start.go:901] validating driver "hyperkit" against &{Name:ha-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:08:36.001519    4013 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:08:36.001702    4013 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:08:36.001927    4013 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 16:08:36.011596    4013 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 16:08:36.017027    4013 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:08:36.017051    4013 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 16:08:36.020140    4013 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:08:36.020202    4013 cni.go:84] Creating CNI manager for ""
	I0805 16:08:36.020212    4013 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0805 16:08:36.020294    4013 start.go:340] cluster config:
	{Name:ha-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:08:36.020400    4013 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:08:36.062580    4013 out.go:177] * Starting "ha-968000" primary control-plane node in "ha-968000" cluster
	I0805 16:08:36.085413    4013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:08:36.085486    4013 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 16:08:36.085505    4013 cache.go:56] Caching tarball of preloaded images
	I0805 16:08:36.085698    4013 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:08:36.085718    4013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:08:36.085921    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:08:36.086796    4013 start.go:360] acquireMachinesLock for ha-968000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:08:36.086915    4013 start.go:364] duration metric: took 94.676µs to acquireMachinesLock for "ha-968000"
	I0805 16:08:36.086955    4013 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:08:36.086972    4013 fix.go:54] fixHost starting: 
	I0805 16:08:36.087391    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:08:36.087423    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:08:36.096218    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51886
	I0805 16:08:36.096566    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:08:36.096926    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:08:36.096939    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:08:36.097199    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:08:36.097327    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:36.097443    4013 main.go:141] libmachine: (ha-968000) Calling .GetState
	I0805 16:08:36.097545    4013 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:08:36.097604    4013 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid from json: 3418
	I0805 16:08:36.098523    4013 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid 3418 missing from process table
	I0805 16:08:36.098563    4013 fix.go:112] recreateIfNeeded on ha-968000: state=Stopped err=<nil>
	I0805 16:08:36.098579    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	W0805 16:08:36.098669    4013 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:08:36.140439    4013 out.go:177] * Restarting existing hyperkit VM for "ha-968000" ...
	I0805 16:08:36.161262    4013 main.go:141] libmachine: (ha-968000) Calling .Start
	I0805 16:08:36.161541    4013 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:08:36.161569    4013 main.go:141] libmachine: (ha-968000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/hyperkit.pid
	I0805 16:08:36.163159    4013 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid 3418 missing from process table
	I0805 16:08:36.163172    4013 main.go:141] libmachine: (ha-968000) DBG | pid 3418 is in state "Stopped"
	I0805 16:08:36.163189    4013 main.go:141] libmachine: (ha-968000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/hyperkit.pid...
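
The restart begins by probing whether the pid recorded in hyperkit.pid is still alive; since pid 3418 is gone from the process table, the file is treated as a leftover from an unclean shutdown and removed. A self-contained sketch of that probe using signal 0, which checks process existence without delivering a signal (the helper names and the relative path here are illustrative, not the driver's actual code):

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"syscall"
)

// pidAlive reports whether a process exists by sending signal 0 (Unix
// only): the kernel performs the existence check without signalling.
func pidAlive(pid int) bool {
	return syscall.Kill(pid, 0) == nil
}

// removeIfStale deletes pidFile when the recorded process is gone,
// mirroring the "Removing stale pid file" step in the log above.
func removeIfStale(pidFile string) error {
	b, err := os.ReadFile(pidFile)
	if err != nil {
		return err
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(b)))
	if err != nil {
		return err
	}
	if pidAlive(pid) {
		return fmt.Errorf("pid %d still running", pid)
	}
	return os.Remove(pidFile)
}

func main() {
	// Path shape taken from the log; adjust for your environment.
	fmt.Println(removeIfStale(".minikube/machines/ha-968000/hyperkit.pid"))
}
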
	I0805 16:08:36.163382    4013 main.go:141] libmachine: (ha-968000) DBG | Using UUID a9f347e2-e9fc-4e4f-b87b-350754bafb6d
	I0805 16:08:36.294197    4013 main.go:141] libmachine: (ha-968000) DBG | Generated MAC 3e:79:a8:cb:37:4b
	I0805 16:08:36.294223    4013 main.go:141] libmachine: (ha-968000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000
	I0805 16:08:36.294340    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a9f347e2-e9fc-4e4f-b87b-350754bafb6d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4780)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:08:36.294368    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a9f347e2-e9fc-4e4f-b87b-350754bafb6d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c4780)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:08:36.294409    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a9f347e2-e9fc-4e4f-b87b-350754bafb6d", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/ha-968000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"}
	I0805 16:08:36.294446    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a9f347e2-e9fc-4e4f-b87b-350754bafb6d -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/ha-968000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"
	I0805 16:08:36.294464    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:08:36.295966    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 DEBUG: hyperkit: Pid is 4025
	I0805 16:08:36.296384    4013 main.go:141] libmachine: (ha-968000) DBG | Attempt 0
	I0805 16:08:36.296402    4013 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:08:36.296476    4013 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid from json: 4025
	I0805 16:08:36.298241    4013 main.go:141] libmachine: (ha-968000) DBG | Searching for 3e:79:a8:cb:37:4b in /var/db/dhcpd_leases ...
	I0805 16:08:36.298320    4013 main.go:141] libmachine: (ha-968000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0805 16:08:36.298334    4013 main.go:141] libmachine: (ha-968000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b15b5a}
	I0805 16:08:36.298341    4013 main.go:141] libmachine: (ha-968000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2acb6}
	I0805 16:08:36.298352    4013 main.go:141] libmachine: (ha-968000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b2ac1c}
	I0805 16:08:36.298378    4013 main.go:141] libmachine: (ha-968000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2ab94}
	I0805 16:08:36.298390    4013 main.go:141] libmachine: (ha-968000) DBG | Found match: 3e:79:a8:cb:37:4b
	I0805 16:08:36.298400    4013 main.go:141] libmachine: (ha-968000) DBG | IP: 192.169.0.5
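
Because hyperkit VMs get their addresses from the macOS DHCP server, the driver recovers the VM's IP by searching /var/db/dhcpd_leases for the MAC it generated (3e:79:a8:cb:37:4b resolves to 192.169.0.5 above). A simplified Go sketch of that lookup; the ip_address=/hw_address= line format is an assumption based on the lease entries printed in the log:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findLeaseIP scans a dhcpd_leases-style file for a hardware address
// and returns the IP recorded in the same entry. This assumes entries
// list ip_address= before hw_address=, as macOS lease files do.
func findLeaseIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ip_address=") {
			ip = strings.TrimPrefix(line, "ip_address=")
		}
		// hw_address lines look like "hw_address=1,3e:79:a8:cb:37:4b".
		if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
			return ip, nil
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := findLeaseIP("/var/db/dhcpd_leases", "3e:79:a8:cb:37:4b")
	fmt.Println(ip, err) // the log above resolved 192.169.0.5
}

In practice the scan has to be retried until the guest's DHCP request lands in the file, which is why the log counts attempts ("Attempt 0").
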
	I0805 16:08:36.298431    4013 main.go:141] libmachine: (ha-968000) Calling .GetConfigRaw
	I0805 16:08:36.299288    4013 main.go:141] libmachine: (ha-968000) Calling .GetIP
	I0805 16:08:36.299496    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:08:36.299907    4013 machine.go:94] provisionDockerMachine start ...
	I0805 16:08:36.299917    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:36.300052    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:36.300161    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:36.300278    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:36.300399    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:36.300504    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:36.300629    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:36.300879    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:36.300887    4013 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:08:36.304094    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:08:36.358116    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:08:36.358849    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:08:36.358861    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:08:36.358871    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:08:36.358879    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:08:36.744699    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:08:36.744726    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:08:36.859121    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:08:36.859139    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:08:36.859155    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:08:36.859188    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:08:36.860075    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:08:36.860087    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:36 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:08:42.442082    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:42 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:08:42.442122    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:42 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:08:42.442133    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:42 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:08:42.468515    4013 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:08:42 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:08:47.381320    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 16:08:47.381334    4013 main.go:141] libmachine: (ha-968000) Calling .GetMachineName
	I0805 16:08:47.381494    4013 buildroot.go:166] provisioning hostname "ha-968000"
	I0805 16:08:47.381505    4013 main.go:141] libmachine: (ha-968000) Calling .GetMachineName
	I0805 16:08:47.381614    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.381731    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:47.381824    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.381916    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.382009    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:47.382131    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:47.382292    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:47.382300    4013 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-968000 && echo "ha-968000" | sudo tee /etc/hostname
	I0805 16:08:47.461361    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-968000
	
	I0805 16:08:47.461391    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.461523    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:47.461610    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.461697    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.461801    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:47.461927    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:47.462076    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:47.462087    4013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-968000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-968000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-968000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:08:47.534682    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:08:47.534701    4013 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:08:47.534713    4013 buildroot.go:174] setting up certificates
	I0805 16:08:47.534720    4013 provision.go:84] configureAuth start
	I0805 16:08:47.534727    4013 main.go:141] libmachine: (ha-968000) Calling .GetMachineName
	I0805 16:08:47.534861    4013 main.go:141] libmachine: (ha-968000) Calling .GetIP
	I0805 16:08:47.534954    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.535056    4013 provision.go:143] copyHostCerts
	I0805 16:08:47.535084    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:08:47.535151    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:08:47.535160    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:08:47.535302    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:08:47.535496    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:08:47.535537    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:08:47.535561    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:08:47.535642    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:08:47.535782    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:08:47.535820    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:08:47.535825    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:08:47.535901    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:08:47.536041    4013 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.ha-968000 san=[127.0.0.1 192.169.0.5 ha-968000 localhost minikube]
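
The server certificate is issued against the local CA with both IP and DNS SANs (127.0.0.1, 192.169.0.5, ha-968000, localhost, minikube) so the Docker TLS endpoint validates however it is addressed. A minimal crypto/x509 sketch of issuing such a certificate; this is an illustration rather than minikube's provisioning code, and error handling in main is elided:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a server certificate signed by ca, carrying the
// IP and DNS SANs from the log above. Key size, lifetime, and serial
// are illustrative choices.
func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-968000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.5")},
		DNSNames:     []string{"ha-968000", "localhost", "minikube"},
	}
	return x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
}

func main() {
	// Throwaway self-signed CA, just to exercise newServerCert.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	der, err := newServerCert(ca, caKey)
	fmt.Println(len(der), err)
}
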
	I0805 16:08:47.710785    4013 provision.go:177] copyRemoteCerts
	I0805 16:08:47.710840    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:08:47.710858    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.710996    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:47.711136    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.711274    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:47.711374    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:08:47.750129    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:08:47.750206    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:08:47.771089    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:08:47.771160    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0805 16:08:47.789876    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:08:47.789938    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:08:47.809484    4013 provision.go:87] duration metric: took 274.74692ms to configureAuth
	I0805 16:08:47.809497    4013 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:08:47.809670    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:08:47.809683    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:47.809829    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.809915    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:47.810002    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.810076    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.810154    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:47.810265    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:47.810397    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:47.810405    4013 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:08:47.878284    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:08:47.878296    4013 buildroot.go:70] root file system type: tmpfs
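
The tmpfs probe above shells out to df over SSH. For reference, the same check can be done in-process on the Linux guest with statfs(2); a sketch, where the magic number is the TMPFS_MAGIC value from linux/magic.h:

//go:build linux

package main

import (
	"fmt"
	"syscall"
)

// tmpfsMagic is TMPFS_MAGIC from linux/magic.h. Checking f_type via
// statfs(2) is an alternative to `df --output=fstype / | tail -n 1`.
const tmpfsMagic = 0x01021994

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/", &st); err != nil {
		panic(err)
	}
	fmt.Println("root is tmpfs:", st.Type == tmpfsMagic)
}
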
	I0805 16:08:47.878387    4013 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:08:47.878399    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.878536    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:47.878623    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.878711    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.878808    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:47.878940    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:47.879074    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:47.879122    4013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:08:47.957253    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:08:47.957278    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:47.957421    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:47.957524    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.957614    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:47.957714    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:47.957844    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:47.957985    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:47.957996    4013 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:08:49.653715    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:08:49.653732    4013 machine.go:97] duration metric: took 13.353812952s to provisionDockerMachine
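
The install step just above is deliberately idempotent: the unit is rendered to docker.service.new, diffed against the live unit, and only when they differ is the new file moved into place and the daemon reloaded, enabled, and restarted; an unchanged re-provision leaves docker running. The same write-if-changed pattern in Go (a sketch run as root on the guest, not the provisioner's actual code; the atomic mv step is simplified to a direct write):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged writes unit to path and restarts the service only
// when the content actually differs, matching the effect of the
// `diff ... || { mv ...; systemctl ... }` one-liner in the log.
func installIfChanged(path string, unit []byte, service string) error {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, unit) {
		return nil // nothing changed; don't bounce the daemon
	}
	if err := os.WriteFile(path, unit, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", service},
		{"restart", service},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	fmt.Println(installIfChanged("/lib/systemd/system/docker.service", unit, "docker"))
}
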
	I0805 16:08:49.653746    4013 start.go:293] postStartSetup for "ha-968000" (driver="hyperkit")
	I0805 16:08:49.653760    4013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:08:49.653771    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:49.653973    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:08:49.653990    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:49.654090    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:49.654219    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:49.654313    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:49.654396    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:08:49.695524    4013 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:08:49.698720    4013 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:08:49.698734    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:08:49.698825    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:08:49.699014    4013 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:08:49.699020    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:08:49.699239    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:08:49.707453    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
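
filesync mirrors everything under the local .minikube/files tree into the guest at the same relative path, which is why files/etc/ssl/certs/16782.pem lands at /etc/ssl/certs/16782.pem. A sketch of that mapping, under the assumption that the relative path alone determines the destination:

// Sketch: walk a "files" root and pair each file with its in-guest path.
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

func assets(root string) ([][2]string, error) {
	var out [][2]string
	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, err := filepath.Rel(root, p)
		if err != nil {
			return err
		}
		out = append(out, [2]string{p, "/" + filepath.ToSlash(rel)})
		return nil
	})
	return out, err
}

func main() {
	pairs, err := assets("/Users/jenkins/minikube-integration/19373-1122/.minikube/files")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, pr := range pairs {
		fmt.Printf("%s -> %s\n", pr[0], pr[1]) // each pair is then scp'd to the VM
	}
}
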
	I0805 16:08:49.726493    4013 start.go:296] duration metric: took 72.739242ms for postStartSetup
	I0805 16:08:49.726518    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:49.726678    4013 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0805 16:08:49.726689    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:49.726778    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:49.726859    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:49.726953    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:49.727030    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:08:49.773612    4013 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0805 16:08:49.773669    4013 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0805 16:08:49.839587    4013 fix.go:56] duration metric: took 13.752613014s for fixHost
	I0805 16:08:49.839610    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:49.839781    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:49.839886    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:49.839982    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:49.840087    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:49.840208    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:49.840351    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:08:49.840358    4013 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:08:49.909831    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722899330.049194417
	
	I0805 16:08:49.909843    4013 fix.go:216] guest clock: 1722899330.049194417
	I0805 16:08:49.909849    4013 fix.go:229] Guest: 2024-08-05 16:08:50.049194417 -0700 PDT Remote: 2024-08-05 16:08:49.8396 -0700 PDT m=+14.197025337 (delta=209.594417ms)
	I0805 16:08:49.909866    4013 fix.go:200] guest clock delta is within tolerance: 209.594417ms
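
The guest clock check runs date +%s.%N over SSH, parses the seconds.nanoseconds reply, and compares it to the host clock; the ~209ms delta above is inside the one-second tolerance, so no clock resync is forced. A sketch of that computation (guestDelta is a hypothetical helper; it assumes %N always yields nine digits):

// Sketch: parse `date +%s.%N` output and compute the guest/host clock delta.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestDelta(out string, now time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(now), nil
}

func main() {
	// Values taken from the log above; the delta comes out to ~209.59ms.
	host := time.Unix(1722899329, 839600000)
	d, _ := guestDelta("1722899330.049194417", host)
	fmt.Println(d, "within tolerance:", d.Abs() < time.Second)
}
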
	I0805 16:08:49.909870    4013 start.go:83] releasing machines lock for "ha-968000", held for 13.822941144s
	I0805 16:08:49.909890    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:49.910020    4013 main.go:141] libmachine: (ha-968000) Calling .GetIP
	I0805 16:08:49.910132    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:49.910474    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:49.910586    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:08:49.910664    4013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:08:49.910695    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:49.910746    4013 ssh_runner.go:195] Run: cat /version.json
	I0805 16:08:49.910757    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:08:49.910786    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:49.910854    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:08:49.910893    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:49.910967    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:08:49.910992    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:49.911086    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:08:49.911105    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:08:49.911177    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:08:49.948334    4013 ssh_runner.go:195] Run: systemctl --version
	I0805 16:08:49.997557    4013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 16:08:50.001927    4013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:08:50.001971    4013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:08:50.014441    4013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
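
Disabling the bridge/podman CNI configs is a rename, not a delete: matching files gain a .mk_disabled suffix so the runtime stops loading them but they can be restored. A sketch of the same sweep (disableCNI is illustrative, not the real cni.go):

// Sketch: rename bridge/podman CNI configs aside, skipping ones already disabled.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableCNI(dir string) ([]string, error) {
	var disabled []string
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	got, err := disableCNI("/etc/cni/net.d")
	fmt.Println(got, err) // e.g. [/etc/cni/net.d/87-podman-bridge.conflist]
}
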
	I0805 16:08:50.014455    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:08:50.014568    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:08:50.030880    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:08:50.040000    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:08:50.048917    4013 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:08:50.048956    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:08:50.058052    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:08:50.067040    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:08:50.075877    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:08:50.084739    4013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:08:50.093910    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:08:50.102684    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:08:50.111468    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:08:50.120485    4013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:08:50.128670    4013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:08:50.136701    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:08:50.239872    4013 ssh_runner.go:195] Run: sudo systemctl restart containerd
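
Each of the sed invocations above is a line-oriented rewrite of /etc/containerd/config.toml; for example, choosing the "cgroupfs" driver means forcing SystemdCgroup to false. A sketch of that one rewrite using a multiline regexp:

// Sketch: flip SystemdCgroup in a containerd config.toml, preserving indentation.
package main

import (
	"fmt"
	"regexp"
)

var systemdCgroup = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

func setCgroupfs(toml []byte) []byte {
	return systemdCgroup.ReplaceAll(toml, []byte("${1}SystemdCgroup = false"))
}

func main() {
	in := []byte("  [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n    SystemdCgroup = true\n")
	fmt.Print(string(setCgroupfs(in)))
}
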
	I0805 16:08:50.259056    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:08:50.259134    4013 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:08:50.276716    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:08:50.288092    4013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:08:50.305475    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:08:50.315851    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:08:50.325889    4013 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:08:50.345027    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:08:50.355226    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:08:50.370181    4013 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:08:50.373242    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:08:50.380619    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:08:50.394005    4013 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:08:50.490673    4013 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:08:50.595291    4013 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:08:50.595364    4013 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:08:50.609503    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:08:50.704344    4013 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:08:53.027644    4013 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.323281261s)
	I0805 16:08:53.027701    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:08:53.038843    4013 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 16:08:53.053238    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:08:53.063556    4013 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:08:53.166406    4013 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:08:53.281072    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:08:53.386855    4013 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:08:53.400726    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:08:53.412004    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:08:53.527406    4013 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:08:53.592203    4013 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:08:53.592286    4013 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
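
"Will wait 60s for socket path" is a stat-until-deadline poll on /var/run/cri-dockerd.sock. A sketch of such a loop (waitForPath and the 500ms poll interval are assumptions, not the real start.go values):

// Sketch: poll for a path to appear, up to a deadline.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	fmt.Println(waitForPath("/var/run/cri-dockerd.sock", 60*time.Second))
}
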
	I0805 16:08:53.596745    4013 start.go:563] Will wait 60s for crictl version
	I0805 16:08:53.596797    4013 ssh_runner.go:195] Run: which crictl
	I0805 16:08:53.600648    4013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:08:53.626561    4013 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0805 16:08:53.626630    4013 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:08:53.645043    4013 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:08:53.705589    4013 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 16:08:53.705632    4013 main.go:141] libmachine: (ha-968000) Calling .GetIP
	I0805 16:08:53.705996    4013 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0805 16:08:53.710588    4013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
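
The /etc/hosts edit above filters out any stale line ending in the tab-separated name, appends the fresh mapping, and copies the result back over the file. The same filter-then-append transformation, sketched:

// Sketch: ensure exactly one "IP<TAB>name" line exists in a hosts file body.
package main

import (
	"fmt"
	"strings"
)

func ensureHost(hosts, ip, name string) string {
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			keep = append(keep, line) // drop any stale mapping for this name
		}
	}
	keep = append(keep, ip+"\t"+name)
	return strings.Join(keep, "\n") + "\n"
}

func main() {
	fmt.Print(ensureHost("127.0.0.1\tlocalhost\n", "192.169.0.1", "host.minikube.internal"))
}
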
	I0805 16:08:53.720355    4013 kubeadm.go:883] updating cluster {Name:ha-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 16:08:53.720443    4013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:08:53.720494    4013 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:08:53.733778    4013 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240730-75a5af0c
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0805 16:08:53.733792    4013 docker.go:615] Images already preloaded, skipping extraction
	I0805 16:08:53.733871    4013 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:08:53.750560    4013 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240730-75a5af0c
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0805 16:08:53.750581    4013 cache_images.go:84] Images are preloaded, skipping loading
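
"Images are preloaded, skipping loading" reduces to a set comparison between the `docker images --format {{.Repository}}:{{.Tag}}` output and the image list required for v1.30.3. A sketch of that check (the required list here is truncated to two entries for brevity):

// Sketch: which required images are absent from the runtime's image list?
package main

import "fmt"

func missing(required, have []string) []string {
	got := make(map[string]bool, len(have))
	for _, img := range have {
		got[img] = true
	}
	var out []string
	for _, img := range required {
		if !got[img] {
			out = append(out, img)
		}
	}
	return out
}

func main() {
	required := []string{"registry.k8s.io/kube-apiserver:v1.30.3", "registry.k8s.io/etcd:3.5.12-0"}
	have := []string{"registry.k8s.io/kube-apiserver:v1.30.3"}
	fmt.Println(missing(required, have)) // [registry.k8s.io/etcd:3.5.12-0]
}
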
	I0805 16:08:53.750593    4013 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.30.3 docker true true} ...
	I0805 16:08:53.750678    4013 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-968000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 16:08:53.750747    4013 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 16:08:53.787873    4013 cni.go:84] Creating CNI manager for ""
	I0805 16:08:53.787890    4013 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0805 16:08:53.787901    4013 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 16:08:53.787917    4013 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-968000 NodeName:ha-968000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 16:08:53.787998    4013 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-968000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
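
The generated config pins podSubnet to 10.244.0.0/16 and serviceSubnet to 10.96.0.0/12; the two ranges must not overlap, or pod and service routing would collide. A quick sanity check, as a sketch:

// Sketch: verify the pod and service CIDRs are disjoint.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	pods := netip.MustParsePrefix("10.244.0.0/16")
	svcs := netip.MustParsePrefix("10.96.0.0/12")
	fmt.Println("overlap:", pods.Overlaps(svcs)) // false: 10.96.0.0/12 ends at 10.111.255.255
}
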
	
	I0805 16:08:53.788013    4013 kube-vip.go:115] generating kube-vip config ...
	I0805 16:08:53.788070    4013 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 16:08:53.800656    4013 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 16:08:53.800732    4013 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
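
The kube-vip manifest wires leader election with vip_leaseduration=5, vip_renewdeadline=3 and vip_retryperiod=1 (seconds); as in client-go style leader election, these are expected to satisfy retry < renew < lease. A sketch of validating that ordering (validateElection is hypothetical):

// Sketch: check the leader-election timing invariant.
package main

import (
	"errors"
	"fmt"
)

func validateElection(lease, renew, retry int) error {
	if !(retry < renew && renew < lease) {
		return errors.New("need retryperiod < renewdeadline < leaseduration")
	}
	return nil
}

func main() {
	fmt.Println(validateElection(5, 3, 1)) // <nil>: the values above are consistent
}
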
	I0805 16:08:53.800782    4013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 16:08:53.809476    4013 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:08:53.809517    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0805 16:08:53.816818    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0805 16:08:53.830799    4013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:08:53.844236    4013 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0805 16:08:53.858097    4013 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0805 16:08:53.871426    4013 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0805 16:08:53.874277    4013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:08:53.883655    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:08:53.988496    4013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:08:54.003102    4013 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000 for IP: 192.169.0.5
	I0805 16:08:54.003116    4013 certs.go:194] generating shared ca certs ...
	I0805 16:08:54.003129    4013 certs.go:226] acquiring lock for ca certs: {Name:mkb83e058d89c7d4e66f4136f377a3c305b13735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:08:54.003311    4013 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key
	I0805 16:08:54.003384    4013 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key
	I0805 16:08:54.003396    4013 certs.go:256] generating profile certs ...
	I0805 16:08:54.003511    4013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.key
	I0805 16:08:54.003533    4013 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key.e79882c6
	I0805 16:08:54.003547    4013 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt.e79882c6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.7 192.169.0.254]
	I0805 16:08:54.115170    4013 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt.e79882c6 ...
	I0805 16:08:54.115186    4013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt.e79882c6: {Name:mk08e7d67872e7bcbb9c4a5ebb3c1a0585610c24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:08:54.115545    4013 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key.e79882c6 ...
	I0805 16:08:54.115555    4013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key.e79882c6: {Name:mk05314b1c47ab3f7e3ebdc93ec7e7e8886a1b84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:08:54.115785    4013 certs.go:381] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt.e79882c6 -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt
	I0805 16:08:54.116009    4013 certs.go:385] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key.e79882c6 -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key
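
The apiserver certificate is minted with every address a client might dial as an IP SAN: the in-cluster service IP 10.96.0.1, localhost, the three control-plane node IPs, and the kube-vip VIP 192.169.0.254. A sketch of how such a template is shaped with crypto/x509 (this is not minikube's crypto.go, just the standard-library shape):

// Sketch: build an x509 template carrying the IP SANs seen in the log above.
package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	ips := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1", "192.169.0.5", "192.169.0.6", "192.169.0.7", "192.169.0.254"}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, s := range ips {
		tmpl.IPAddresses = append(tmpl.IPAddresses, net.ParseIP(s))
	}
	fmt.Println(len(tmpl.IPAddresses), "IP SANs") // 7; sign via x509.CreateCertificate
}
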
	I0805 16:08:54.116270    4013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key
	I0805 16:08:54.116285    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 16:08:54.116311    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 16:08:54.116333    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 16:08:54.116355    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 16:08:54.116375    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 16:08:54.116396    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 16:08:54.116416    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 16:08:54.116436    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 16:08:54.116538    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem (1338 bytes)
	W0805 16:08:54.116595    4013 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0805 16:08:54.116605    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:08:54.116642    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem (1082 bytes)
	I0805 16:08:54.116678    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:08:54.116714    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem (1675 bytes)
	I0805 16:08:54.116792    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:08:54.116828    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem -> /usr/share/ca-certificates/1678.pem
	I0805 16:08:54.116855    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /usr/share/ca-certificates/16782.pem
	I0805 16:08:54.116877    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:08:54.117335    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:08:54.150739    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 16:08:54.186504    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:08:54.226561    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:08:54.269928    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0805 16:08:54.303048    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 16:08:54.323374    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:08:54.342974    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 16:08:54.363396    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0805 16:08:54.383241    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0805 16:08:54.402950    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:08:54.422603    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 16:08:54.436211    4013 ssh_runner.go:195] Run: openssl version
	I0805 16:08:54.440410    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0805 16:08:54.448686    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0805 16:08:54.452045    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:08:54.452085    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0805 16:08:54.456273    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0805 16:08:54.464533    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0805 16:08:54.472739    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0805 16:08:54.476114    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:08:54.476150    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0805 16:08:54.480401    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 16:08:54.488643    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:08:54.496792    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:08:54.500141    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:08:54.500183    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:08:54.504411    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 16:08:54.512563    4013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:08:54.516172    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 16:08:54.520959    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 16:08:54.525326    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 16:08:54.530085    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 16:08:54.534367    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 16:08:54.538835    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
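
Each `openssl x509 -checkend 86400` call asks whether a certificate expires within the next 24 hours. The equivalent test in Go, sketched with crypto/x509:

// Sketch: does the PEM cert at path expire within duration d?
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
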
	I0805 16:08:54.543179    4013 kubeadm.go:392] StartCluster: {Name:ha-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:08:54.543300    4013 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:08:54.556340    4013 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 16:08:54.563823    4013 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 16:08:54.563834    4013 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 16:08:54.563876    4013 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 16:08:54.571534    4013 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:08:54.571871    4013 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-968000" does not appear in /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:08:54.571963    4013 kubeconfig.go:62] /Users/jenkins/minikube-integration/19373-1122/kubeconfig needs updating (will repair): [kubeconfig missing "ha-968000" cluster setting kubeconfig missing "ha-968000" context setting]
	I0805 16:08:54.572632    4013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/kubeconfig: {Name:mk2a0d8b4d330b3c26432fc65d015ddf98a9cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:08:54.573442    4013 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:08:54.573629    4013 kapi.go:59] client config for ha-968000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x85c5060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:08:54.573946    4013 cert_rotation.go:137] Starting client certificate rotation controller
	I0805 16:08:54.574116    4013 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 16:08:54.581700    4013 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0805 16:08:54.581717    4013 kubeadm.go:597] duration metric: took 17.878919ms to restartPrimaryControlPlane
	I0805 16:08:54.581733    4013 kubeadm.go:394] duration metric: took 38.554869ms to StartCluster
	I0805 16:08:54.581748    4013 settings.go:142] acquiring lock: {Name:mk564a817a54ecf2aef16a4d2309e85208c0231f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:08:54.581853    4013 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:08:54.582215    4013 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/kubeconfig: {Name:mk2a0d8b4d330b3c26432fc65d015ddf98a9cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:08:54.582428    4013 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:08:54.582441    4013 start.go:241] waiting for startup goroutines ...
	I0805 16:08:54.582452    4013 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 16:08:54.582577    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:08:54.626035    4013 out.go:177] * Enabled addons: 
	I0805 16:08:54.646951    4013 addons.go:510] duration metric: took 64.498286ms for enable addons: enabled=[]
	I0805 16:08:54.646991    4013 start.go:246] waiting for cluster config update ...
	I0805 16:08:54.647007    4013 start.go:255] writing updated cluster config ...
	I0805 16:08:54.669067    4013 out.go:177] 
	I0805 16:08:54.690499    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:08:54.690643    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:08:54.713097    4013 out.go:177] * Starting "ha-968000-m02" control-plane node in "ha-968000" cluster
	I0805 16:08:54.754948    4013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:08:54.755014    4013 cache.go:56] Caching tarball of preloaded images
	I0805 16:08:54.755180    4013 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:08:54.755198    4013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:08:54.755327    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:08:54.756294    4013 start.go:360] acquireMachinesLock for ha-968000-m02: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:08:54.756399    4013 start.go:364] duration metric: took 80.734µs to acquireMachinesLock for "ha-968000-m02"
	I0805 16:08:54.756425    4013 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:08:54.756433    4013 fix.go:54] fixHost starting: m02
	I0805 16:08:54.756872    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:08:54.756903    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:08:54.766304    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51908
	I0805 16:08:54.766655    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:08:54.766978    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:08:54.766996    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:08:54.767193    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:08:54.767300    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:08:54.767383    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetState
	I0805 16:08:54.767464    4013 main.go:141] libmachine: (ha-968000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:08:54.767541    4013 main.go:141] libmachine: (ha-968000-m02) DBG | hyperkit pid from json: 3958
	I0805 16:08:54.768456    4013 main.go:141] libmachine: (ha-968000-m02) DBG | hyperkit pid 3958 missing from process table
	I0805 16:08:54.768475    4013 fix.go:112] recreateIfNeeded on ha-968000-m02: state=Stopped err=<nil>
	I0805 16:08:54.768483    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	W0805 16:08:54.768562    4013 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:08:54.811088    4013 out.go:177] * Restarting existing hyperkit VM for "ha-968000-m02" ...
	I0805 16:08:54.832129    4013 main.go:141] libmachine: (ha-968000-m02) Calling .Start
	I0805 16:08:54.832449    4013 main.go:141] libmachine: (ha-968000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:08:54.832594    4013 main.go:141] libmachine: (ha-968000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/hyperkit.pid
	I0805 16:08:54.834273    4013 main.go:141] libmachine: (ha-968000-m02) DBG | hyperkit pid 3958 missing from process table
	I0805 16:08:54.834290    4013 main.go:141] libmachine: (ha-968000-m02) DBG | pid 3958 is in state "Stopped"
	I0805 16:08:54.834314    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/hyperkit.pid...
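
"hyperkit pid 3958 missing from process table" is a liveness probe on the pid recorded in hyperkit.pid; on Unix, sending signal 0 tests for existence without disturbing the process, after which a stale pid file can be removed. A sketch:

// Sketch: is the process with this pid still alive?
package main

import (
	"fmt"
	"os"
	"syscall"
)

func pidAlive(pid int) bool {
	proc, err := os.FindProcess(pid) // always succeeds on Unix
	if err != nil {
		return false
	}
	return proc.Signal(syscall.Signal(0)) == nil
}

func main() {
	fmt.Println(pidAlive(3958)) // false once the VM has been stopped
}
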
	I0805 16:08:54.834555    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Using UUID fe2b7178-e807-4f71-b597-390ca402ab71
	I0805 16:08:54.862624    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Generated MAC b2:64:5d:40:b:b5
	I0805 16:08:54.862655    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000
	I0805 16:08:54.862830    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fe2b7178-e807-4f71-b597-390ca402ab71", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaa20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:08:54.862873    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"fe2b7178-e807-4f71-b597-390ca402ab71", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003aaa20)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:08:54.862907    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "fe2b7178-e807-4f71-b597-390ca402ab71", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/ha-968000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"}
	I0805 16:08:54.862951    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U fe2b7178-e807-4f71-b597-390ca402ab71 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/ha-968000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"
	I0805 16:08:54.862972    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:08:54.864230    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 DEBUG: hyperkit: Pid is 4036
	I0805 16:08:54.864617    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Attempt 0
	I0805 16:08:54.864628    4013 main.go:141] libmachine: (ha-968000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:08:54.864712    4013 main.go:141] libmachine: (ha-968000-m02) DBG | hyperkit pid from json: 4036
	I0805 16:08:54.866673    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Searching for b2:64:5d:40:b:b5 in /var/db/dhcpd_leases ...
	I0805 16:08:54.866730    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0805 16:08:54.866746    4013 main.go:141] libmachine: (ha-968000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2acfd}
	I0805 16:08:54.866756    4013 main.go:141] libmachine: (ha-968000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b15b5a}
	I0805 16:08:54.866763    4013 main.go:141] libmachine: (ha-968000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2acb6}
	I0805 16:08:54.866779    4013 main.go:141] libmachine: (ha-968000-m02) DBG | Found match: b2:64:5d:40:b:b5
	I0805 16:08:54.866785    4013 main.go:141] libmachine: (ha-968000-m02) DBG | IP: 192.169.0.6
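
Finding the restarted VM's IP means scanning /var/db/dhcpd_leases for the MAC hyperkit generated (b2:64:5d:40:b:b5 above). A line-oriented sketch, assuming the macOS bootpd lease format of ip_address=/hw_address=1, pairs:

// Sketch: return the leased IP for a given MAC from a bootpd leases file.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func ipForMAC(leasePath, mac string) (string, error) {
	f, err := os.Open(leasePath)
	if err != nil {
		return "", err
	}
	defer f.Close()
	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if v, ok := strings.CutPrefix(line, "ip_address="); ok {
			ip = v // remember the address of the lease entry we're inside
		}
		if v, ok := strings.CutPrefix(line, "hw_address=1,"); ok && v == mac {
			return ip, sc.Err()
		}
	}
	return "", fmt.Errorf("MAC %s not found", mac)
}

func main() {
	fmt.Println(ipForMAC("/var/db/dhcpd_leases", "b2:64:5d:40:b:b5"))
}
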
	I0805 16:08:54.866826    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetConfigRaw
	I0805 16:08:54.867497    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetIP
	I0805 16:08:54.867687    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:08:54.868091    4013 machine.go:94] provisionDockerMachine start ...
	I0805 16:08:54.868103    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:08:54.868265    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:08:54.868366    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:08:54.868470    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:08:54.868561    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:08:54.868654    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:08:54.868809    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:08:54.868963    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:08:54.868973    4013 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:08:54.872068    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:08:54.880205    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:08:54.881201    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:08:54.881214    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:08:54.881243    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:08:54.881257    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:08:55.265892    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:08:55.265907    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:08:55.380667    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:08:55.380687    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:08:55.380695    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:08:55.380701    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:08:55.381533    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:08:55.381546    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:08:55 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:09:00.973735    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:09:00 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:09:00.973856    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:09:00 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:09:00.973866    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:09:00 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:09:00.997819    4013 main.go:141] libmachine: (ha-968000-m02) DBG | 2024/08/05 16:09:00 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:09:05.931816    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 16:09:05.931831    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetMachineName
	I0805 16:09:05.931997    4013 buildroot.go:166] provisioning hostname "ha-968000-m02"
	I0805 16:09:05.932009    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetMachineName
	I0805 16:09:05.932102    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:05.932202    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:05.932286    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:05.932365    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:05.932456    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:05.932575    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:05.932721    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:09:05.932729    4013 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-968000-m02 && echo "ha-968000-m02" | sudo tee /etc/hostname
	I0805 16:09:05.993192    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-968000-m02
	
	I0805 16:09:05.993215    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:05.993338    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:05.993436    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:05.993511    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:05.993594    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:05.993723    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:05.993859    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:09:05.993871    4013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-968000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-968000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-968000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:09:06.050566    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
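
[Editor's note] The shell snippet just above makes the new hostname resolve locally: if no /etc/hosts line already ends with ha-968000-m02, it rewrites an existing 127.0.1.1 entry or appends one. A rough Go equivalent of that logic, as a sketch only (the real work happens over SSH inside the guest):

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    // patchEtcHosts ensures hostname resolves via a 127.0.1.1 entry,
    // mirroring the grep/sed/tee shell logic from the log.
    func patchEtcHosts(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        content := string(data)
        present := regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`)
        if present.MatchString(content) {
            return nil // already resolvable, nothing to do
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        entry := "127.0.1.1 " + hostname
        if loopback.MatchString(content) {
            content = loopback.ReplaceAllString(content, entry) // rewrite existing entry
        } else {
            if !strings.HasSuffix(content, "\n") {
                content += "\n"
            }
            content += entry + "\n" // append a new entry
        }
        return os.WriteFile(path, []byte(content), 0o644)
    }

    func main() {
        if err := patchEtcHosts("/etc/hosts", "ha-968000-m02"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
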
	I0805 16:09:06.050581    4013 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:09:06.050591    4013 buildroot.go:174] setting up certificates
	I0805 16:09:06.050596    4013 provision.go:84] configureAuth start
	I0805 16:09:06.050603    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetMachineName
	I0805 16:09:06.050733    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetIP
	I0805 16:09:06.050844    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:06.050935    4013 provision.go:143] copyHostCerts
	I0805 16:09:06.050963    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:09:06.051010    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:09:06.051016    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:09:06.051159    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:09:06.051373    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:09:06.051403    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:09:06.051408    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:09:06.051520    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:09:06.051663    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:09:06.051692    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:09:06.051697    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:09:06.051762    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:09:06.051905    4013 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.ha-968000-m02 san=[127.0.0.1 192.169.0.6 ha-968000-m02 localhost minikube]
	I0805 16:09:06.144117    4013 provision.go:177] copyRemoteCerts
	I0805 16:09:06.144168    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:09:06.144182    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:06.144315    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:06.144419    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.144519    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:06.144605    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/id_rsa Username:docker}
	I0805 16:09:06.177583    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:09:06.177652    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:09:06.196674    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:09:06.196731    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:09:06.215833    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:09:06.215904    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 16:09:06.234708    4013 provision.go:87] duration metric: took 184.105335ms to configureAuth
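
[Editor's note] configureAuth (184ms above) refreshed the host-side CA/client certs, generated a Docker server certificate whose SANs cover the loopback address, the VM IP, and the machine names (san=[127.0.0.1 192.169.0.6 ha-968000-m02 localhost minikube]), and scp'd ca.pem/server.pem/server-key.pem into /etc/docker. A compressed crypto/x509 sketch of that signing step; this shows the general technique, not libmachine's exact implementation, and assumes an RSA CA key in PKCS#1 PEM form.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "errors"
        "math/big"
        "net"
        "os"
        "time"
    )

    // signServerCert issues a server certificate for the given SANs, signed
    // by the machine CA.
    func signServerCert(caCertPEM, caKeyPEM []byte, org string, sans []string) (certPEM, keyPEM []byte, err error) {
        caBlock, _ := pem.Decode(caCertPEM)
        keyBlock, _ := pem.Decode(caKeyPEM)
        if caBlock == nil || keyBlock == nil {
            return nil, nil, errors.New("bad CA PEM input")
        }
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            return nil, nil, err
        }
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        if err != nil {
            return nil, nil, err
        }
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{org}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // split the SAN list into IP and DNS entries, as in the san=[...] log line
        for _, san := range sans {
            if ip := net.ParseIP(san); ip != nil {
                tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
            } else {
                tmpl.DNSNames = append(tmpl.DNSNames, san)
            }
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }

    func main() {
        caCert, err := os.ReadFile("ca.pem") // illustrative file names
        if err != nil {
            panic(err)
        }
        caKey, err := os.ReadFile("ca-key.pem")
        if err != nil {
            panic(err)
        }
        cert, key, err := signServerCert(caCert, caKey, "jenkins.ha-968000-m02",
            []string{"127.0.0.1", "192.169.0.6", "ha-968000-m02", "localhost", "minikube"})
        if err != nil {
            panic(err)
        }
        _ = os.WriteFile("server.pem", cert, 0o644)
        _ = os.WriteFile("server-key.pem", key, 0o600)
    }
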
	I0805 16:09:06.234721    4013 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:09:06.234888    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:09:06.234902    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:06.235034    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:06.235129    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:06.235219    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.235306    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.235377    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:06.235486    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:06.235620    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:09:06.235627    4013 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:09:06.286203    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:09:06.286215    4013 buildroot.go:70] root file system type: tmpfs
	I0805 16:09:06.286297    4013 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:09:06.286308    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:06.286429    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:06.286523    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.286613    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.286698    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:06.286817    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:06.286956    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:09:06.287002    4013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:09:06.347900    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:09:06.347916    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:06.348060    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:06.348168    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.348290    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:06.348380    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:06.348531    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:06.348709    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:09:06.348724    4013 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:09:07.986428    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:09:07.986451    4013 machine.go:97] duration metric: took 13.118346339s to provisionDockerMachine
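
[Editor's note] The docker.service unit above is rendered host-side and streamed through `sudo tee` into docker.service.new; the empty `ExecStart=` line clears the inherited command so systemd accepts the single replacement ExecStart (the unit's own comments explain why), and the diff-or-move idiom only installs and restarts when the rendered unit differs. A minimal text/template sketch of that rendering; field names here are illustrative, not minikube's.

    package main

    import (
        "os"
        "text/template"
    )

    // A trimmed unit template showing the clear-then-set ExecStart pattern
    // from the log.
    const unitTmpl = `[Service]
    Type=notify
    Environment="NO_PROXY={{.NoProxy}}"
    # clear the inherited ExecStart before setting the replacement
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:{{.Port}} -H unix:///var/run/docker.sock --tlsverify --tlscacert {{.CACert}} --tlscert {{.ServerCert}} --tlskey {{.ServerKey}} --label provider={{.Provider}}
    `

    type dockerOpts struct {
        NoProxy, CACert, ServerCert, ServerKey, Provider string
        Port                                             int
    }

    func main() {
        t := template.Must(template.New("unit").Parse(unitTmpl))
        err := t.Execute(os.Stdout, dockerOpts{
            NoProxy:    "192.169.0.5",
            Port:       2376,
            CACert:     "/etc/docker/ca.pem",
            ServerCert: "/etc/docker/server.pem",
            ServerKey:  "/etc/docker/server-key.pem",
            Provider:   "hyperkit",
        })
        if err != nil {
            panic(err)
        }
    }
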
	I0805 16:09:07.986459    4013 start.go:293] postStartSetup for "ha-968000-m02" (driver="hyperkit")
	I0805 16:09:07.986469    4013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:09:07.986480    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:07.986670    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:09:07.986681    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:07.986783    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:07.986882    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:07.986962    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:07.987053    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/id_rsa Username:docker}
	I0805 16:09:08.025708    4013 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:09:08.030674    4013 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:09:08.030690    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:09:08.030788    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:09:08.030933    4013 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:09:08.030940    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:09:08.031094    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:09:08.040549    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:09:08.073731    4013 start.go:296] duration metric: took 87.255709ms for postStartSetup
	I0805 16:09:08.073758    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:08.073944    4013 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0805 16:09:08.073958    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:08.074051    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:08.074132    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:08.074215    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:08.074303    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/id_rsa Username:docker}
	I0805 16:09:08.106482    4013 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0805 16:09:08.106540    4013 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0805 16:09:08.160338    4013 fix.go:56] duration metric: took 13.403896455s for fixHost
	I0805 16:09:08.160384    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:08.160527    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:08.160625    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:08.160714    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:08.160794    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:08.160927    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:08.161086    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0805 16:09:08.161094    4013 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 16:09:08.212458    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722899348.353849181
	
	I0805 16:09:08.212468    4013 fix.go:216] guest clock: 1722899348.353849181
	I0805 16:09:08.212476    4013 fix.go:229] Guest: 2024-08-05 16:09:08.353849181 -0700 PDT Remote: 2024-08-05 16:09:08.160354 -0700 PDT m=+32.517773342 (delta=193.495181ms)
	I0805 16:09:08.212487    4013 fix.go:200] guest clock delta is within tolerance: 193.495181ms
	I0805 16:09:08.212490    4013 start.go:83] releasing machines lock for "ha-968000-m02", held for 13.45607681s
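
[Editor's note] fixHost cross-checks the guest clock against the host by running `date +%s.%N` over SSH and comparing with local time; here the 193ms delta was within tolerance, so no resync was needed. A sketch of the comparison; the one-second tolerance below is an assumption, not a value taken from minikube.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output (assuming nine
    // fractional digits) and returns how far it is from local time.
    func clockDelta(guestOut string) (time.Duration, error) {
        parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return 0, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return 0, err
            }
        }
        return time.Until(time.Unix(sec, nsec)), nil
    }

    func main() {
        d, err := clockDelta("1722899348.353849181") // value captured in the log above
        if err != nil {
            panic(err)
        }
        if d < 0 {
            d = -d
        }
        fmt.Printf("delta=%v within tolerance: %v\n", d, d < time.Second)
    }
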
	I0805 16:09:08.212505    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:08.212639    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetIP
	I0805 16:09:08.235368    4013 out.go:177] * Found network options:
	I0805 16:09:08.255968    4013 out.go:177]   - NO_PROXY=192.169.0.5
	W0805 16:09:08.277055    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:09:08.277126    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:08.277962    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:08.278232    4013 main.go:141] libmachine: (ha-968000-m02) Calling .DriverName
	I0805 16:09:08.278363    4013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:09:08.278403    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	W0805 16:09:08.278441    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:09:08.278542    4013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:09:08.278561    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHHostname
	I0805 16:09:08.278609    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:08.278735    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHPort
	I0805 16:09:08.278828    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:08.278924    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHKeyPath
	I0805 16:09:08.279039    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:08.279094    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetSSHUsername
	I0805 16:09:08.279296    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/id_rsa Username:docker}
	I0805 16:09:08.279328    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m02/id_rsa Username:docker}
	W0805 16:09:08.308476    4013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:09:08.308543    4013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:09:08.366966    4013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:09:08.366989    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:09:08.367106    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:09:08.383096    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:09:08.391318    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:09:08.399437    4013 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:09:08.399485    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:09:08.407713    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:09:08.415945    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:09:08.424060    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:09:08.432199    4013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:09:08.440635    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:09:08.449476    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:09:08.457693    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:09:08.465963    4013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:09:08.473316    4013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:09:08.480715    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:09:08.580965    4013 ssh_runner.go:195] Run: sudo systemctl restart containerd
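
[Editor's note] The run of sed commands above rewrites /etc/containerd/config.toml in place: it pins the pause:3.9 sandbox image, forces `SystemdCgroup = false` (the cgroupfs driver), normalizes the runc runtime name, and points conf_dir at /etc/cni/net.d before restarting containerd. The central edit, expressed as a Go regexp for illustration (a sketch of the sed at 16:09:08.399, not minikube's code):

    package main

    import (
        "fmt"
        "regexp"
    )

    // setCgroupfs flips SystemdCgroup to false in a containerd config,
    // preserving the line's leading indentation like the sed backreference.
    var systemdCgroup = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)

    func setCgroupfs(config string) string {
        return systemdCgroup.ReplaceAllString(config, "${1}SystemdCgroup = false")
    }

    func main() {
        in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
    `
        fmt.Print(setCgroupfs(in))
    }
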
	I0805 16:09:08.599460    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:09:08.599526    4013 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:09:08.618244    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:09:08.628953    4013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:09:08.643835    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:09:08.654207    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:09:08.667243    4013 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:09:08.688662    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:09:08.699359    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:09:08.714408    4013 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:09:08.717488    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:09:08.724576    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:09:08.738058    4013 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:09:08.841454    4013 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:09:08.945955    4013 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:09:08.945979    4013 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:09:08.960827    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:09:09.064765    4013 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:09:11.412428    4013 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.347643222s)
	I0805 16:09:11.412491    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:09:11.422964    4013 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 16:09:11.435663    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:09:11.446013    4013 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:09:11.539337    4013 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:09:11.650058    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:09:11.748634    4013 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:09:11.762213    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:09:11.773039    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:09:11.872006    4013 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:09:11.939388    4013 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:09:11.939480    4013 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:09:11.943952    4013 start.go:563] Will wait 60s for crictl version
	I0805 16:09:11.944006    4013 ssh_runner.go:195] Run: which crictl
	I0805 16:09:11.947391    4013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:09:11.980231    4013 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0805 16:09:11.980302    4013 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:09:11.997853    4013 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:09:12.060154    4013 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 16:09:12.080904    4013 out.go:177]   - env NO_PROXY=192.169.0.5
	I0805 16:09:12.102334    4013 main.go:141] libmachine: (ha-968000-m02) Calling .GetIP
	I0805 16:09:12.102720    4013 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0805 16:09:12.107517    4013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:09:12.117349    4013 mustload.go:65] Loading cluster: ha-968000
	I0805 16:09:12.117532    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:09:12.117765    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:09:12.117781    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:09:12.126279    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51930
	I0805 16:09:12.126593    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:09:12.126941    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:09:12.126959    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:09:12.127183    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:09:12.127284    4013 main.go:141] libmachine: (ha-968000) Calling .GetState
	I0805 16:09:12.127369    4013 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:09:12.127424    4013 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid from json: 4025
	I0805 16:09:12.128374    4013 host.go:66] Checking if "ha-968000" exists ...
	I0805 16:09:12.128663    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:09:12.128678    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:09:12.137093    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51932
	I0805 16:09:12.137400    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:09:12.137721    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:09:12.137731    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:09:12.137942    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:09:12.138052    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:09:12.138149    4013 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000 for IP: 192.169.0.6
	I0805 16:09:12.138156    4013 certs.go:194] generating shared ca certs ...
	I0805 16:09:12.138169    4013 certs.go:226] acquiring lock for ca certs: {Name:mkb83e058d89c7d4e66f4136f377a3c305b13735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:09:12.138309    4013 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key
	I0805 16:09:12.138365    4013 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key
	I0805 16:09:12.138373    4013 certs.go:256] generating profile certs ...
	I0805 16:09:12.138477    4013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.key
	I0805 16:09:12.138565    4013 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key.77dc068d
	I0805 16:09:12.138631    4013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key
	I0805 16:09:12.138639    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 16:09:12.138660    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 16:09:12.138681    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 16:09:12.138700    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 16:09:12.138717    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 16:09:12.138735    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 16:09:12.138754    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 16:09:12.138776    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 16:09:12.138855    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem (1338 bytes)
	W0805 16:09:12.138895    4013 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0805 16:09:12.138904    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:09:12.138940    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem (1082 bytes)
	I0805 16:09:12.138974    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:09:12.139009    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem (1675 bytes)
	I0805 16:09:12.139074    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:09:12.139106    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:09:12.139125    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem -> /usr/share/ca-certificates/1678.pem
	I0805 16:09:12.139142    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /usr/share/ca-certificates/16782.pem
	I0805 16:09:12.139167    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:09:12.139259    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:09:12.139346    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:09:12.139430    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:09:12.139498    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:09:12.171916    4013 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0805 16:09:12.175290    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0805 16:09:12.184095    4013 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0805 16:09:12.187128    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0805 16:09:12.195868    4013 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0805 16:09:12.198915    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0805 16:09:12.208072    4013 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0805 16:09:12.211239    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0805 16:09:12.220236    4013 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0805 16:09:12.223357    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0805 16:09:12.231812    4013 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0805 16:09:12.234916    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0805 16:09:12.243760    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:09:12.264594    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 16:09:12.284204    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:09:12.304172    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:09:12.324282    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0805 16:09:12.344243    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 16:09:12.363682    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:09:12.383391    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 16:09:12.403042    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:09:12.422963    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0805 16:09:12.442422    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0805 16:09:12.462071    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0805 16:09:12.476035    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0805 16:09:12.489609    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0805 16:09:12.502965    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0805 16:09:12.516617    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0805 16:09:12.530178    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0805 16:09:12.543803    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0805 16:09:12.557186    4013 ssh_runner.go:195] Run: openssl version
	I0805 16:09:12.561690    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0805 16:09:12.570469    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0805 16:09:12.573916    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:09:12.573968    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0805 16:09:12.578325    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0805 16:09:12.586655    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0805 16:09:12.595266    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0805 16:09:12.598773    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:09:12.598808    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0805 16:09:12.603106    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 16:09:12.611770    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:09:12.620276    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:09:12.623836    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:09:12.623874    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:09:12.628099    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 16:09:12.636558    4013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:09:12.640104    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 16:09:12.644367    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 16:09:12.648558    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 16:09:12.653002    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 16:09:12.657413    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 16:09:12.661571    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
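
[Editor's note] Each control-plane certificate above is checked with `openssl x509 -checkend 86400`, i.e. "does this cert expire within the next 24 hours" (non-zero exit if it does). The equivalent check in Go's crypto/x509, as a sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether a PEM certificate expires within d,
    // matching `openssl x509 -checkend` semantics.
    func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            return false, fmt.Errorf("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        expiring, err := expiresWithin(data, 86400*time.Second)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("expires within 24h:", expiring)
    }
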
	I0805 16:09:12.665817    4013 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.30.3 docker true true} ...
	I0805 16:09:12.665880    4013 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-968000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 16:09:12.665898    4013 kube-vip.go:115] generating kube-vip config ...
	I0805 16:09:12.665932    4013 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 16:09:12.678633    4013 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 16:09:12.678672    4013 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
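
[Editor's note] The static pod manifest above runs kube-vip on each control-plane node: the instances contend for the plndr-cp-lock lease (vip_leaseduration 5s, renew 3s), and the leader answers ARP (vip_arp) for the virtual IP 192.169.0.254, load-balancing API traffic on port 8443 (lb_enable/lb_port). A quick reachability probe for that VIP, as a trivial sketch using the addresses from the manifest:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // a healthy kube-vip leader should accept TCP connections on the API port
        conn, err := net.DialTimeout("tcp", "192.169.0.254:8443", 3*time.Second)
        if err != nil {
            fmt.Println("VIP not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("VIP is accepting connections")
    }
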
	I0805 16:09:12.678725    4013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 16:09:12.686682    4013 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:09:12.686732    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0805 16:09:12.694235    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0805 16:09:12.708178    4013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:09:12.721592    4013 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0805 16:09:12.735241    4013 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0805 16:09:12.738251    4013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:09:12.747938    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:09:12.839333    4013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:09:12.855307    4013 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:09:12.855486    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:09:12.876653    4013 out.go:177] * Verifying Kubernetes components...
	I0805 16:09:12.918406    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:09:13.043139    4013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:09:13.061746    4013 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:09:13.061950    4013 kapi.go:59] client config for ha-968000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x85c5060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0805 16:09:13.061990    4013 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0805 16:09:13.062163    4013 node_ready.go:35] waiting up to 6m0s for node "ha-968000-m02" to be "Ready" ...
	I0805 16:09:13.062248    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:13.062253    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:13.062261    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:13.062265    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.259366    4013 round_trippers.go:574] Response Status: 200 OK in 8197 milliseconds
	I0805 16:09:21.260575    4013 node_ready.go:49] node "ha-968000-m02" has status "Ready":"True"
	I0805 16:09:21.260589    4013 node_ready.go:38] duration metric: took 8.198406493s for node "ha-968000-m02" to be "Ready" ...
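
[Editor's note] The round_trippers lines trace the readiness wait: GET /api/v1/nodes/ha-968000-m02 is polled until the NodeReady condition reports True (8.2s here, while the apiserver finished coming up). With client-go the same loop looks roughly like the sketch below; building the clientset from a kubeconfig is assumed, and wait.PollUntilContextTimeout needs a recent client-go.

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls until the node's Ready condition is True.
    func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 3*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // tolerate transient apiserver errors, keep polling
                }
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady {
                        return cond.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitNodeReady(context.Background(), cs, "ha-968000-m02"); err != nil {
            panic(err)
        }
    }
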
	I0805 16:09:21.260596    4013 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:09:21.260646    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:09:21.260653    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.260660    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.260665    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.302891    4013 round_trippers.go:574] Response Status: 200 OK in 42 milliseconds
	I0805 16:09:21.310518    4013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hjp5z" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.310596    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hjp5z
	I0805 16:09:21.310619    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.310632    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.310639    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.313152    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:21.313881    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:21.313892    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.313899    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.313902    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.317700    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:21.318187    4013 pod_ready.go:92] pod "coredns-7db6d8ff4d-hjp5z" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:21.318198    4013 pod_ready.go:81] duration metric: took 7.662792ms for pod "coredns-7db6d8ff4d-hjp5z" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.318207    4013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.318250    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:09:21.318256    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.318263    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.318268    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.326180    4013 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0805 16:09:21.326741    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:21.326750    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.326758    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.326763    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.331849    4013 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 16:09:21.332344    4013 pod_ready.go:92] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:21.332356    4013 pod_ready.go:81] duration metric: took 14.143254ms for pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.332364    4013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.332409    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000
	I0805 16:09:21.332416    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.332423    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.332426    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.335622    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:21.335995    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:21.336004    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.336019    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.336025    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.339965    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:21.340276    4013 pod_ready.go:92] pod "etcd-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:21.340287    4013 pod_ready.go:81] duration metric: took 7.918315ms for pod "etcd-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.340295    4013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.340346    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m02
	I0805 16:09:21.340352    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.340359    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.340365    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.342503    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:21.343015    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:21.343024    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.343031    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.343036    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.346019    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:21.346517    4013 pod_ready.go:92] pod "etcd-ha-968000-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:21.346530    4013 pod_ready.go:81] duration metric: took 6.229187ms for pod "etcd-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.346558    4013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.346618    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m03
	I0805 16:09:21.346625    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.346633    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.346638    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.351435    4013 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 16:09:21.461654    4013 request.go:629] Waited for 109.640417ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:21.461696    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:21.461703    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.461709    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.461715    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.465496    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:21.465774    4013 pod_ready.go:92] pod "etcd-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:21.465784    4013 pod_ready.go:81] duration metric: took 119.216409ms for pod "etcd-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
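
The "Waited ... due to client-side throttling" lines come from client-go's token-bucket rate limiter, not from apiserver priority-and-fairness. The config dump earlier in this log shows QPS:0, Burst:0, so client-go falls back to its defaults of 5 QPS with a burst of 10, which spaces steady-state requests about 200ms apart, matching the ~195ms waits throughout this section. A small sketch of that limiter:

    // throttle_sketch.go — the token-bucket behaviour behind the waits above.
    package main

    import (
    	"fmt"
    	"time"

    	"k8s.io/client-go/util/flowcontrol"
    )

    func main() {
    	// Same defaults client-go applies when rest.Config leaves QPS/Burst at zero.
    	limiter := flowcontrol.NewTokenBucketRateLimiter(5.0, 10)

    	start := time.Now()
    	for i := 0; i < 15; i++ {
    		limiter.Accept() // blocks once the 10-token burst is spent
    		fmt.Printf("request %2d at %v\n", i, time.Since(start).Round(time.Millisecond))
    	}
    	// After the burst, requests 10..14 print roughly 200ms apart.
    }
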
	I0805 16:09:21.465817    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.661090    4013 request.go:629] Waited for 195.188408ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000
	I0805 16:09:21.661122    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000
	I0805 16:09:21.661127    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.661133    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.661136    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.663700    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:21.860705    4013 request.go:629] Waited for 196.382714ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:21.860744    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:21.860750    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:21.860758    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:21.860764    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:21.864103    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:21.864428    4013 pod_ready.go:92] pod "kube-apiserver-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:21.864438    4013 pod_ready.go:81] duration metric: took 398.612841ms for pod "kube-apiserver-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:21.864448    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:22.062331    4013 request.go:629] Waited for 197.82051ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m02
	I0805 16:09:22.062511    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m02
	I0805 16:09:22.062523    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:22.062533    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:22.062539    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:22.065766    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:22.262057    4013 request.go:629] Waited for 195.681075ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:22.262125    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:22.262130    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:22.262137    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:22.262140    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:22.264946    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:22.265310    4013 pod_ready.go:92] pod "kube-apiserver-ha-968000-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:22.265318    4013 pod_ready.go:81] duration metric: took 400.862554ms for pod "kube-apiserver-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:22.265325    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:22.460707    4013 request.go:629] Waited for 195.347101ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m03
	I0805 16:09:22.460759    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m03
	I0805 16:09:22.460765    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:22.460781    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:22.460785    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:22.464130    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:22.660697    4013 request.go:629] Waited for 196.193657ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:22.660729    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:22.660736    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:22.660779    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:22.660812    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:22.662931    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:22.663458    4013 pod_ready.go:92] pod "kube-apiserver-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:22.663468    4013 pod_ready.go:81] duration metric: took 398.13793ms for pod "kube-apiserver-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:22.663475    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:22.861064    4013 request.go:629] Waited for 197.549417ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000
	I0805 16:09:22.861116    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000
	I0805 16:09:22.861124    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:22.861131    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:22.861137    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:22.863357    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:23.060775    4013 request.go:629] Waited for 196.997441ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:23.060838    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:23.060844    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:23.060850    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:23.060854    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:23.062638    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:09:23.062947    4013 pod_ready.go:92] pod "kube-controller-manager-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:23.062956    4013 pod_ready.go:81] duration metric: took 399.47493ms for pod "kube-controller-manager-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:23.062963    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:23.262182    4013 request.go:629] Waited for 199.175443ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m02
	I0805 16:09:23.262278    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m02
	I0805 16:09:23.262289    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:23.262301    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:23.262309    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:23.265274    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:23.460721    4013 request.go:629] Waited for 194.890215ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:23.460750    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:23.460755    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:23.460761    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:23.460766    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:23.462860    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:23.463267    4013 pod_ready.go:97] node "ha-968000-m02" hosting pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m02" has status "Ready":"False"
	I0805 16:09:23.463277    4013 pod_ready.go:81] duration metric: took 400.308105ms for pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	E0805 16:09:23.463284    4013 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-968000-m02" hosting pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m02" has status "Ready":"False"
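
Note the "(skipping!)" branch above: when the node hosting a pod is itself NotReady (ha-968000-m02 here, which is mid-restart), the wait logs the condition and moves on rather than failing the whole run. A hedged sketch of that rule; this is not minikube's actual pod_ready.go, and nodeReady/waitSkippingNotReadyHosts are names invented for illustration:

    // podready_sketch.go — skip-if-host-not-Ready rule for one pod.
    package readiness

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // nodeReady reports whether the node carries the Ready=True condition.
    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    // waitSkippingNotReadyHosts mirrors the log's behaviour for a single pod.
    func waitSkippingNotReadyHosts(client kubernetes.Interface, ns, podName string) error {
    	pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), podName, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	node, err := client.CoreV1().Nodes().Get(context.TODO(), pod.Spec.NodeName, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	if !nodeReady(node) {
    		// Matches the "(skipping!)" branch above: logged, but not fatal.
    		fmt.Printf("node %q hosting pod %q is not Ready, skipping\n", node.Name, podName)
    		return nil
    	}
    	// ... otherwise poll the pod's own Ready condition (omitted).
    	return nil
    }
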
	I0805 16:09:23.463290    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:23.662538    4013 request.go:629] Waited for 199.207212ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:09:23.662619    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:09:23.662625    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:23.662631    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:23.662635    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:23.664768    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:23.861796    4013 request.go:629] Waited for 196.439694ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:23.861935    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:23.861946    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:23.861956    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:23.861962    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:23.865458    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:23.865815    4013 pod_ready.go:92] pod "kube-controller-manager-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:23.865826    4013 pod_ready.go:81] duration metric: took 402.529289ms for pod "kube-controller-manager-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:23.865833    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fvd5q" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:24.061409    4013 request.go:629] Waited for 195.531329ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvd5q
	I0805 16:09:24.061446    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvd5q
	I0805 16:09:24.061452    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:24.061491    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:24.061496    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:24.063747    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:24.261469    4013 request.go:629] Waited for 197.298268ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:24.261565    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:24.261573    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:24.261581    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:24.261587    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:24.264861    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:24.265277    4013 pod_ready.go:97] node "ha-968000-m02" hosting pod "kube-proxy-fvd5q" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m02" has status "Ready":"False"
	I0805 16:09:24.265288    4013 pod_ready.go:81] duration metric: took 399.450273ms for pod "kube-proxy-fvd5q" in "kube-system" namespace to be "Ready" ...
	E0805 16:09:24.265296    4013 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-968000-m02" hosting pod "kube-proxy-fvd5q" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m02" has status "Ready":"False"
	I0805 16:09:24.265301    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p4xgk" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:24.461481    4013 request.go:629] Waited for 196.027245ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p4xgk
	I0805 16:09:24.461559    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p4xgk
	I0805 16:09:24.461578    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:24.461590    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:24.461596    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:24.464886    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:24.661858    4013 request.go:629] Waited for 196.151825ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:24.662024    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:24.662034    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:24.662044    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:24.662050    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:24.665229    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:24.665765    4013 pod_ready.go:92] pod "kube-proxy-p4xgk" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:24.665774    4013 pod_ready.go:81] duration metric: took 400.467773ms for pod "kube-proxy-p4xgk" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:24.665781    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qptt6" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:24.861504    4013 request.go:629] Waited for 195.677553ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qptt6
	I0805 16:09:24.861566    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qptt6
	I0805 16:09:24.861577    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:24.861588    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:24.861595    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:24.865839    4013 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 16:09:25.061918    4013 request.go:629] Waited for 195.700422ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m04
	I0805 16:09:25.061988    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m04
	I0805 16:09:25.061994    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:25.062000    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:25.062004    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:25.063765    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:09:25.064046    4013 pod_ready.go:92] pod "kube-proxy-qptt6" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:25.064056    4013 pod_ready.go:81] duration metric: took 398.270559ms for pod "kube-proxy-qptt6" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:25.064065    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v87jb" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:25.261506    4013 request.go:629] Waited for 197.352793ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v87jb
	I0805 16:09:25.261554    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v87jb
	I0805 16:09:25.261563    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:25.261573    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:25.261582    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:25.264807    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:25.461565    4013 request.go:629] Waited for 196.17837ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:25.461605    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:25.461613    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:25.461621    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:25.461625    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:25.464575    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:25.464951    4013 pod_ready.go:92] pod "kube-proxy-v87jb" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:25.464960    4013 pod_ready.go:81] duration metric: took 400.887094ms for pod "kube-proxy-v87jb" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:25.464982    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:25.662277    4013 request.go:629] Waited for 197.19961ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000
	I0805 16:09:25.662316    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000
	I0805 16:09:25.662325    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:25.662333    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:25.662339    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:25.664596    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:25.861101    4013 request.go:629] Waited for 196.140125ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:25.861136    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:09:25.861142    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:25.861149    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:25.861155    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:25.863555    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:25.863937    4013 pod_ready.go:92] pod "kube-scheduler-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:25.863947    4013 pod_ready.go:81] duration metric: took 398.956028ms for pod "kube-scheduler-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:25.863960    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:26.061952    4013 request.go:629] Waited for 197.955177ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m02
	I0805 16:09:26.062048    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m02
	I0805 16:09:26.062057    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:26.062065    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:26.062070    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:26.064556    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:26.262140    4013 request.go:629] Waited for 197.126449ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:26.262175    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:09:26.262180    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:26.262186    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:26.262190    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:26.264203    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:26.264592    4013 pod_ready.go:97] node "ha-968000-m02" hosting pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m02" has status "Ready":"False"
	I0805 16:09:26.264603    4013 pod_ready.go:81] duration metric: took 400.638133ms for pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	E0805 16:09:26.264611    4013 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-968000-m02" hosting pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m02" has status "Ready":"False"
	I0805 16:09:26.264615    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:26.461402    4013 request.go:629] Waited for 196.72911ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m03
	I0805 16:09:26.461551    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m03
	I0805 16:09:26.461563    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:26.461573    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:26.461580    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:26.465124    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:26.661745    4013 request.go:629] Waited for 196.148221ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:26.661836    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:09:26.661842    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:26.661848    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:26.661852    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:26.663931    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:09:26.664273    4013 pod_ready.go:92] pod "kube-scheduler-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:09:26.664282    4013 pod_ready.go:81] duration metric: took 399.661598ms for pod "kube-scheduler-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:09:26.664289    4013 pod_ready.go:38] duration metric: took 5.403682263s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:09:26.664305    4013 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:09:26.664365    4013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:09:26.676043    4013 api_server.go:72] duration metric: took 13.820707254s to wait for apiserver process to appear ...
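
The process check above runs `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH (-x exact match, -n newest, -f match against the full command line). A local stand-in using os/exec, assuming only that pgrep is on PATH:

    // pgrep_sketch.go — local equivalent of the ssh_runner step above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// On the real VM this goes over SSH; locally, exec the same command.
    	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		fmt.Println("apiserver process not found:", err)
    		return
    	}
    	fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
    }
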
	I0805 16:09:26.676055    4013 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:09:26.676075    4013 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0805 16:09:26.679244    4013 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0805 16:09:26.679280    4013 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0805 16:09:26.679287    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:26.679294    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:26.679298    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:26.679920    4013 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:09:26.680031    4013 api_server.go:141] control plane version: v1.30.3
	I0805 16:09:26.680044    4013 api_server.go:131] duration metric: took 3.983266ms to wait for apiserver health ...
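
The health probe is a plain HTTPS GET against /healthz that must return 200 with body "ok", authenticated with the profile's client certificate. A hedged sketch; the client.crt/client.key/ca.crt paths are placeholders for the profile files named in the config dump earlier:

    // healthz_sketch.go — apiserver health probe with mutual TLS.
    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    func main() {
    	cert, err := tls.LoadX509KeyPair("client.crt", "client.key") // placeholder paths
    	if err != nil {
    		panic(err)
    	}
    	caPEM, err := os.ReadFile("ca.crt") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)

    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
    	}}
    	resp, err := client.Get("https://192.169.0.5:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }
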
	I0805 16:09:26.680049    4013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 16:09:26.861214    4013 request.go:629] Waited for 181.081617ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:09:26.861259    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:09:26.861267    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:26.861278    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:26.861307    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:26.876137    4013 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0805 16:09:26.882111    4013 system_pods.go:59] 26 kube-system pods found
	I0805 16:09:26.882136    4013 system_pods.go:61] "coredns-7db6d8ff4d-hjp5z" [e31fd97b-2727-4db3-a17c-3302c320832b] Running
	I0805 16:09:26.882140    4013 system_pods.go:61] "coredns-7db6d8ff4d-mfzln" [ea5c136e-84a6-4253-8f61-85c427b83840] Running
	I0805 16:09:26.882143    4013 system_pods.go:61] "etcd-ha-968000" [24590478-199e-4d78-8312-3d5924d6e915] Running
	I0805 16:09:26.882146    4013 system_pods.go:61] "etcd-ha-968000-m02" [cefe6f5a-3a87-4ccf-9419-0b864275c9c9] Running
	I0805 16:09:26.882149    4013 system_pods.go:61] "etcd-ha-968000-m03" [ec752887-5a12-4888-ba88-3fb5d54c6ce7] Running
	I0805 16:09:26.882151    4013 system_pods.go:61] "kindnet-5dshm" [2641d2a9-a26a-4cbe-b8ea-99ed7c7af43c] Running
	I0805 16:09:26.882153    4013 system_pods.go:61] "kindnet-cglm9" [80a5d2ca-3d9f-4347-bb68-cd6eac4e4aa8] Running
	I0805 16:09:26.882156    4013 system_pods.go:61] "kindnet-fp5ns" [bf9c4454-9491-4a21-8f0a-6c6f21919551] Running
	I0805 16:09:26.882158    4013 system_pods.go:61] "kindnet-qh6l6" [382ac149-5a4e-4fe4-aaaa-9c929c93b101] Running
	I0805 16:09:26.882161    4013 system_pods.go:61] "kube-apiserver-ha-968000" [04e9a721-eb6e-47b4-a7f0-2cad1ee201f7] Running
	I0805 16:09:26.882164    4013 system_pods.go:61] "kube-apiserver-ha-968000-m02" [0465a825-6697-4a98-bb88-18df7929a5dd] Running
	I0805 16:09:26.882166    4013 system_pods.go:61] "kube-apiserver-ha-968000-m03" [a0d3fc83-9820-463e-81bb-2abcb1b4c868] Running
	I0805 16:09:26.882169    4013 system_pods.go:61] "kube-controller-manager-ha-968000" [2078d070-21b4-4d47-a4d3-b130fa8b3aaf] Running
	I0805 16:09:26.882171    4013 system_pods.go:61] "kube-controller-manager-ha-968000-m02" [f0a1cc06-05bb-4efa-9a53-ebccba2b5f9e] Running
	I0805 16:09:26.882174    4013 system_pods.go:61] "kube-controller-manager-ha-968000-m03" [d140abba-93f2-4062-8ee8-3918ff5ae882] Running
	I0805 16:09:26.882176    4013 system_pods.go:61] "kube-proxy-fvd5q" [f2f13535-5802-4a1c-8243-48de42b79e74] Running
	I0805 16:09:26.882179    4013 system_pods.go:61] "kube-proxy-p4xgk" [aaca6036-f95c-44fb-a358-5ac881148fa4] Running
	I0805 16:09:26.882182    4013 system_pods.go:61] "kube-proxy-qptt6" [a826a636-1d05-4cca-a56d-d25a9cf41506] Running
	I0805 16:09:26.882184    4013 system_pods.go:61] "kube-proxy-v87jb" [d98f61ac-3a61-452c-8507-7258a9703c15] Running
	I0805 16:09:26.882188    4013 system_pods.go:61] "kube-scheduler-ha-968000" [20bf4b5e-71a1-4708-bb6a-34b0e44f196d] Running
	I0805 16:09:26.882190    4013 system_pods.go:61] "kube-scheduler-ha-968000-m02" [e590d5bf-9517-433b-9759-5b0f16cfe9a9] Running
	I0805 16:09:26.882193    4013 system_pods.go:61] "kube-scheduler-ha-968000-m03" [91120005-f0b0-47d5-a91c-c06b12e6da3e] Running
	I0805 16:09:26.882197    4013 system_pods.go:61] "kube-vip-ha-968000" [373808d0-e9f2-4cea-a7b6-98b309fac6e7] Running
	I0805 16:09:26.882201    4013 system_pods.go:61] "kube-vip-ha-968000-m02" [713fc36a-5582-464c-82d3-02905c81b753] Running
	I0805 16:09:26.882204    4013 system_pods.go:61] "kube-vip-ha-968000-m03" [d94a7e1c-9ddd-4229-b4cd-ac05384dd20a] Running
	I0805 16:09:26.882207    4013 system_pods.go:61] "storage-provisioner" [52e2952a-756d-4f65-84f5-588cb6563297] Running
	I0805 16:09:26.882211    4013 system_pods.go:74] duration metric: took 202.157859ms to wait for pod list to return data ...
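
The system-pods check lists everything in the kube-system namespace and requires each pod to be in phase Running (26 found here, all Running). A minimal sketch of that verification; allSystemPodsRunning is an illustrative name, not minikube's:

    // systempods_sketch.go — verify every kube-system pod is Running.
    package readiness

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func allSystemPodsRunning(client kubernetes.Interface) (bool, error) {
    	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		return false, err
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			return false, nil
    		}
    	}
    	return true, nil
    }
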
	I0805 16:09:26.882216    4013 default_sa.go:34] waiting for default service account to be created ...
	I0805 16:09:27.061417    4013 request.go:629] Waited for 179.110016ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:09:27.061534    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:09:27.061546    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:27.061557    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:27.061563    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:27.065177    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:27.065383    4013 default_sa.go:45] found service account: "default"
	I0805 16:09:27.065396    4013 default_sa.go:55] duration metric: took 183.174105ms for default service account to be created ...
	I0805 16:09:27.065406    4013 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 16:09:27.262565    4013 request.go:629] Waited for 197.034728ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:09:27.262625    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:09:27.262635    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:27.262646    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:27.262654    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:27.268433    4013 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 16:09:27.273328    4013 system_pods.go:86] 26 kube-system pods found
	I0805 16:09:27.273339    4013 system_pods.go:89] "coredns-7db6d8ff4d-hjp5z" [e31fd97b-2727-4db3-a17c-3302c320832b] Running
	I0805 16:09:27.273344    4013 system_pods.go:89] "coredns-7db6d8ff4d-mfzln" [ea5c136e-84a6-4253-8f61-85c427b83840] Running
	I0805 16:09:27.273348    4013 system_pods.go:89] "etcd-ha-968000" [24590478-199e-4d78-8312-3d5924d6e915] Running
	I0805 16:09:27.273351    4013 system_pods.go:89] "etcd-ha-968000-m02" [cefe6f5a-3a87-4ccf-9419-0b864275c9c9] Running
	I0805 16:09:27.273354    4013 system_pods.go:89] "etcd-ha-968000-m03" [ec752887-5a12-4888-ba88-3fb5d54c6ce7] Running
	I0805 16:09:27.273358    4013 system_pods.go:89] "kindnet-5dshm" [2641d2a9-a26a-4cbe-b8ea-99ed7c7af43c] Running
	I0805 16:09:27.273361    4013 system_pods.go:89] "kindnet-cglm9" [80a5d2ca-3d9f-4347-bb68-cd6eac4e4aa8] Running
	I0805 16:09:27.273365    4013 system_pods.go:89] "kindnet-fp5ns" [bf9c4454-9491-4a21-8f0a-6c6f21919551] Running
	I0805 16:09:27.273369    4013 system_pods.go:89] "kindnet-qh6l6" [382ac149-5a4e-4fe4-aaaa-9c929c93b101] Running
	I0805 16:09:27.273372    4013 system_pods.go:89] "kube-apiserver-ha-968000" [04e9a721-eb6e-47b4-a7f0-2cad1ee201f7] Running
	I0805 16:09:27.273376    4013 system_pods.go:89] "kube-apiserver-ha-968000-m02" [0465a825-6697-4a98-bb88-18df7929a5dd] Running
	I0805 16:09:27.273380    4013 system_pods.go:89] "kube-apiserver-ha-968000-m03" [a0d3fc83-9820-463e-81bb-2abcb1b4c868] Running
	I0805 16:09:27.273383    4013 system_pods.go:89] "kube-controller-manager-ha-968000" [2078d070-21b4-4d47-a4d3-b130fa8b3aaf] Running
	I0805 16:09:27.273387    4013 system_pods.go:89] "kube-controller-manager-ha-968000-m02" [f0a1cc06-05bb-4efa-9a53-ebccba2b5f9e] Running
	I0805 16:09:27.273393    4013 system_pods.go:89] "kube-controller-manager-ha-968000-m03" [d140abba-93f2-4062-8ee8-3918ff5ae882] Running
	I0805 16:09:27.273398    4013 system_pods.go:89] "kube-proxy-fvd5q" [f2f13535-5802-4a1c-8243-48de42b79e74] Running
	I0805 16:09:27.273401    4013 system_pods.go:89] "kube-proxy-p4xgk" [aaca6036-f95c-44fb-a358-5ac881148fa4] Running
	I0805 16:09:27.273408    4013 system_pods.go:89] "kube-proxy-qptt6" [a826a636-1d05-4cca-a56d-d25a9cf41506] Running
	I0805 16:09:27.273412    4013 system_pods.go:89] "kube-proxy-v87jb" [d98f61ac-3a61-452c-8507-7258a9703c15] Running
	I0805 16:09:27.273415    4013 system_pods.go:89] "kube-scheduler-ha-968000" [20bf4b5e-71a1-4708-bb6a-34b0e44f196d] Running
	I0805 16:09:27.273419    4013 system_pods.go:89] "kube-scheduler-ha-968000-m02" [e590d5bf-9517-433b-9759-5b0f16cfe9a9] Running
	I0805 16:09:27.273422    4013 system_pods.go:89] "kube-scheduler-ha-968000-m03" [91120005-f0b0-47d5-a91c-c06b12e6da3e] Running
	I0805 16:09:27.273426    4013 system_pods.go:89] "kube-vip-ha-968000" [373808d0-e9f2-4cea-a7b6-98b309fac6e7] Running
	I0805 16:09:27.273429    4013 system_pods.go:89] "kube-vip-ha-968000-m02" [713fc36a-5582-464c-82d3-02905c81b753] Running
	I0805 16:09:27.273433    4013 system_pods.go:89] "kube-vip-ha-968000-m03" [d94a7e1c-9ddd-4229-b4cd-ac05384dd20a] Running
	I0805 16:09:27.273450    4013 system_pods.go:89] "storage-provisioner" [52e2952a-756d-4f65-84f5-588cb6563297] Running
	I0805 16:09:27.273458    4013 system_pods.go:126] duration metric: took 208.046004ms to wait for k8s-apps to be running ...
	I0805 16:09:27.273468    4013 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 16:09:27.273520    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:09:27.285035    4013 system_svc.go:56] duration metric: took 11.567511ms WaitForService to wait for kubelet
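
The kubelet service check passes when `systemctl is-active --quiet kubelet` exits 0; --quiet suppresses output, so the exit code alone carries the answer. A local stand-in for the SSH-executed version above:

    // kubelet_svc_sketch.go — service liveness via systemctl exit status.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Exit code 0 means active; anything else surfaces as an *ExitError.
    	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
    	if err != nil {
    		fmt.Println("kubelet is not active:", err)
    		return
    	}
    	fmt.Println("kubelet is active")
    }
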
	I0805 16:09:27.285048    4013 kubeadm.go:582] duration metric: took 14.42971445s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:09:27.285060    4013 node_conditions.go:102] verifying NodePressure condition ...
	I0805 16:09:27.461886    4013 request.go:629] Waited for 176.780844ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0805 16:09:27.461995    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0805 16:09:27.462013    4013 round_trippers.go:469] Request Headers:
	I0805 16:09:27.462026    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:09:27.462035    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:09:27.465297    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:09:27.466219    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:09:27.466232    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:09:27.466242    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:09:27.466246    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:09:27.466249    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:09:27.466253    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:09:27.466256    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:09:27.466259    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:09:27.466262    4013 node_conditions.go:105] duration metric: took 181.199284ms to run NodePressure ...
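
The NodePressure step lists all four nodes and records per-node capacity (17734596Ki ephemeral storage and 2 CPUs each, above) while verifying no pressure condition is set. A sketch of that verification; verifyNodePressure is an illustrative name:

    // nodepressure_sketch.go — capacity report plus pressure-condition check.
    package readiness

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func verifyNodePressure(client kubernetes.Interface) error {
    	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
    		fmt.Printf("node cpu capacity is %s\n", cpu.String())
    		for _, c := range n.Status.Conditions {
    			// MemoryPressure / DiskPressure / PIDPressure should all be False.
    			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure ||
    				c.Type == corev1.NodePIDPressure) && c.Status == corev1.ConditionTrue {
    				return fmt.Errorf("node %s reports %s", n.Name, c.Type)
    			}
    		}
    	}
    	return nil
    }
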
	I0805 16:09:27.466271    4013 start.go:241] waiting for startup goroutines ...
	I0805 16:09:27.466288    4013 start.go:255] writing updated cluster config ...
	I0805 16:09:27.488716    4013 out.go:177] 
	I0805 16:09:27.508938    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:09:27.509085    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:09:27.531540    4013 out.go:177] * Starting "ha-968000-m03" control-plane node in "ha-968000" cluster
	I0805 16:09:27.573486    4013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:09:27.573507    4013 cache.go:56] Caching tarball of preloaded images
	I0805 16:09:27.573613    4013 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:09:27.573623    4013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:09:27.573701    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:09:27.574588    4013 start.go:360] acquireMachinesLock for ha-968000-m03: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:09:27.574644    4013 start.go:364] duration metric: took 42.919µs to acquireMachinesLock for "ha-968000-m03"
	I0805 16:09:27.574659    4013 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:09:27.574662    4013 fix.go:54] fixHost starting: m03
	I0805 16:09:27.574910    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:09:27.574930    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:09:27.583789    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51937
	I0805 16:09:27.584141    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:09:27.584476    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:09:27.584490    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:09:27.584707    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:09:27.584816    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:09:27.584907    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetState
	I0805 16:09:27.584990    4013 main.go:141] libmachine: (ha-968000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:09:27.585071    4013 main.go:141] libmachine: (ha-968000-m03) DBG | hyperkit pid from json: 3471
	I0805 16:09:27.585977    4013 main.go:141] libmachine: (ha-968000-m03) DBG | hyperkit pid 3471 missing from process table
	I0805 16:09:27.585998    4013 fix.go:112] recreateIfNeeded on ha-968000-m03: state=Stopped err=<nil>
	I0805 16:09:27.586006    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	W0805 16:09:27.586083    4013 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:09:27.606653    4013 out.go:177] * Restarting existing hyperkit VM for "ha-968000-m03" ...
	I0805 16:09:27.648666    4013 main.go:141] libmachine: (ha-968000-m03) Calling .Start
	I0805 16:09:27.648869    4013 main.go:141] libmachine: (ha-968000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:09:27.648916    4013 main.go:141] libmachine: (ha-968000-m03) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/hyperkit.pid
	I0805 16:09:27.650524    4013 main.go:141] libmachine: (ha-968000-m03) DBG | hyperkit pid 3471 missing from process table
	I0805 16:09:27.650545    4013 main.go:141] libmachine: (ha-968000-m03) DBG | pid 3471 is in state "Stopped"
	I0805 16:09:27.650562    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/hyperkit.pid...
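
A pid file left behind by an unclean shutdown is detected by probing the recorded pid with signal 0, which tests process existence without delivering anything; if the process is missing, the file is stale and is removed before the VM restarts. A hedged, Unix-only local sketch of that cleanup (the pid-file path is a placeholder):

    // pidfile_sketch.go — stale hyperkit.pid detection and removal.
    package main

    import (
    	"fmt"
    	"os"
    	"strconv"
    	"strings"
    	"syscall"
    )

    func main() {
    	const pidFile = "hyperkit.pid" // placeholder for the machine-dir path in the log
    	data, err := os.ReadFile(pidFile)
    	if err != nil {
    		fmt.Println("no pid file, nothing to clean")
    		return
    	}
    	pid, err := strconv.Atoi(strings.TrimSpace(string(data)))
    	if err != nil {
    		panic(err)
    	}
    	// Signal 0 probes for existence without sending a signal (ESRCH if gone).
    	if err := syscall.Kill(pid, 0); err != nil {
    		fmt.Printf("pid %d missing from process table, removing stale pid file\n", pid)
    		os.Remove(pidFile)
    		return
    	}
    	fmt.Printf("pid %d is still running\n", pid)
    }
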
	I0805 16:09:27.650769    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Using UUID 2e5bd4cb-7666-4039-8bdc-5eded2ad114e
	I0805 16:09:27.679630    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Generated MAC 5e:e5:6c:f1:60:ca
	I0805 16:09:27.679657    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000
	I0805 16:09:27.679792    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2e5bd4cb-7666-4039-8bdc-5eded2ad114e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:09:27.679833    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2e5bd4cb-7666-4039-8bdc-5eded2ad114e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003acae0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:09:27.679876    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2e5bd4cb-7666-4039-8bdc-5eded2ad114e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/ha-968000-m03.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"}
	I0805 16:09:27.679918    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2e5bd4cb-7666-4039-8bdc-5eded2ad114e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/ha-968000-m03.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"
	I0805 16:09:27.679930    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:09:27.681441    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 DEBUG: hyperkit: Pid is 4050
	I0805 16:09:27.681855    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Attempt 0
	I0805 16:09:27.681870    4013 main.go:141] libmachine: (ha-968000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:09:27.681942    4013 main.go:141] libmachine: (ha-968000-m03) DBG | hyperkit pid from json: 4050
	I0805 16:09:27.684086    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Searching for 5e:e5:6c:f1:60:ca in /var/db/dhcpd_leases ...
	I0805 16:09:27.684171    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0805 16:09:27.684192    4013 main.go:141] libmachine: (ha-968000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:09:27.684213    4013 main.go:141] libmachine: (ha-968000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2acfd}
	I0805 16:09:27.684223    4013 main.go:141] libmachine: (ha-968000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b15b5a}
	I0805 16:09:27.684257    4013 main.go:141] libmachine: (ha-968000-m03) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b2ac1c}
	I0805 16:09:27.684275    4013 main.go:141] libmachine: (ha-968000-m03) DBG | Found match: 5e:e5:6c:f1:60:ca
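The lookup above resolves the new VM's IP by scanning macOS's /var/db/dhcpd_leases for the entry whose hardware address matches the VM's MAC. A minimal Go sketch of that matching, assuming the usual name=/ip_address=/hw_address= block layout of that file (findLeaseIP is a hypothetical helper, not minikube's code):

```go
// Sketch: scan /var/db/dhcpd_leases for a MAC address and print its IP,
// mirroring the "Searching for 5e:e5:6c:f1:60:ca" lookup logged above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func findLeaseIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=") // remember this block's IP
		case strings.HasPrefix(line, "hw_address="):
			// file format assumed: hw_address=1,5e:e5:6c:f1:60:ca
			if strings.HasSuffix(line, ","+mac) || strings.HasSuffix(line, "="+mac) {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := findLeaseIP("/var/db/dhcpd_leases", "5e:e5:6c:f1:60:ca")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip) // the run above matched 192.169.0.7
}
```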
	I0805 16:09:27.684281    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetConfigRaw
	I0805 16:09:27.684302    4013 main.go:141] libmachine: (ha-968000-m03) DBG | IP: 192.169.0.7
	I0805 16:09:27.684999    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetIP
	I0805 16:09:27.685240    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:09:27.685658    4013 machine.go:94] provisionDockerMachine start ...
	I0805 16:09:27.685674    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:09:27.685796    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:09:27.685888    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:09:27.685972    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:09:27.686054    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:09:27.686136    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:09:27.686243    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:09:27.686399    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:09:27.686406    4013 main.go:141] libmachine: About to run SSH command:
	hostname
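The "native" SSH client logged here authenticates with the machine's key and runs a single command. A hedged sketch of the same round trip with golang.org/x/crypto/ssh, using the host, user, and key path that appear later in this log (illustrative, not minikube's implementation):

```go
// Sketch: run `hostname` over SSH with key auth against 192.169.0.7:22,
// roughly what the native client above is doing.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/ha-968000-m03/id_rsa"))
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; verify host keys in real use
	}
	client, err := ssh.Dial("tcp", "192.169.0.7:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.Output("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out) // the run above got back "minikube" before the rename
}
```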
	I0805 16:09:27.689026    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:09:27.697927    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:09:27.698811    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:09:27.698833    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:09:27.698857    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:09:27.698876    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:27 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:09:28.083003    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:09:28.083019    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:09:28.198118    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:09:28.198136    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:09:28.198156    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:09:28.198170    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:09:28.198987    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:09:28.198999    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:28 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:09:33.906297    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:33 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:09:33.906335    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:33 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:09:33.906345    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:33 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:09:33.929592    4013 main.go:141] libmachine: (ha-968000-m03) DBG | 2024/08/05 16:09:33 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:10:02.753110    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 16:10:02.753128    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetMachineName
	I0805 16:10:02.753270    4013 buildroot.go:166] provisioning hostname "ha-968000-m03"
	I0805 16:10:02.753282    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetMachineName
	I0805 16:10:02.753381    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:02.753472    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:02.753543    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:02.753631    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:02.753716    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:02.753836    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:02.753997    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:10:02.754006    4013 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-968000-m03 && echo "ha-968000-m03" | sudo tee /etc/hostname
	I0805 16:10:02.815926    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-968000-m03
	
	I0805 16:10:02.815941    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:02.816075    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:02.816178    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:02.816265    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:02.816353    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:02.816497    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:02.816655    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:10:02.816667    4013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-968000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-968000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-968000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:10:02.874015    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:10:02.874031    4013 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:10:02.874040    4013 buildroot.go:174] setting up certificates
	I0805 16:10:02.874046    4013 provision.go:84] configureAuth start
	I0805 16:10:02.874053    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetMachineName
	I0805 16:10:02.874189    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetIP
	I0805 16:10:02.874289    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:02.874374    4013 provision.go:143] copyHostCerts
	I0805 16:10:02.874402    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:10:02.874450    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:10:02.874455    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:10:02.874582    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:10:02.874781    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:10:02.874825    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:10:02.874830    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:10:02.874901    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:10:02.875047    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:10:02.875075    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:10:02.875079    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:10:02.875146    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:10:02.875295    4013 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.ha-968000-m03 san=[127.0.0.1 192.169.0.7 ha-968000-m03 localhost minikube]
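provision.go then mints a server certificate signed by the minikube CA with exactly the SAN set logged above (127.0.0.1, 192.169.0.7, ha-968000-m03, localhost, minikube). A hedged sketch of that step with crypto/x509, assuming an RSA, PKCS#1-encoded CA key in ca.pem/ca-key.pem; this is illustrative, not minikube's exact code:

```go
// Sketch: issue a CA-signed server certificate carrying the SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func mustPEM(path string) *pem.Block {
	b, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	blk, _ := pem.Decode(b)
	if blk == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	return blk
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("ca.pem").Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem").Bytes) // assumes PKCS#1 RSA key
	if err != nil {
		log.Fatal(err)
	}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-968000-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // validity period is a placeholder
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.7")},
		DNSNames:     []string{"ha-968000-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```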
	I0805 16:10:03.100424    4013 provision.go:177] copyRemoteCerts
	I0805 16:10:03.100475    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:10:03.100489    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:03.100628    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:03.100734    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.100820    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:03.100908    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/id_rsa Username:docker}
	I0805 16:10:03.133644    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:10:03.133711    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:10:03.152881    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:10:03.152956    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 16:10:03.172153    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:10:03.172226    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 16:10:03.192347    4013 provision.go:87] duration metric: took 318.292468ms to configureAuth
	I0805 16:10:03.192362    4013 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:10:03.192542    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:10:03.192555    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:03.192694    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:03.192785    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:03.192880    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.192966    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.193041    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:03.193164    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:03.193316    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:10:03.193325    4013 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:10:03.244032    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:10:03.244045    4013 buildroot.go:70] root file system type: tmpfs
	I0805 16:10:03.244123    4013 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:10:03.244135    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:03.244259    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:03.244342    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.244429    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.244514    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:03.244643    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:03.244779    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:10:03.244826    4013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:10:03.306704    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:10:03.306723    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:03.306859    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:03.306950    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.307037    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:03.307124    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:03.307256    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:03.307400    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:10:03.307414    4013 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:10:04.932560    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:10:04.932575    4013 machine.go:97] duration metric: took 37.246896971s to provisionDockerMachine
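The command at 16:10:03.307414 is an update-only-if-changed idiom: render the candidate unit to docker.service.new, and only move it into place (with a daemon-reload, enable, and restart) when diff reports a difference, as it did here because the installed unit did not exist yet. A small Go sketch that builds the same one-liner:

```go
// Sketch of the idempotent systemd-unit update used above for docker.service.
package main

import "fmt"

func updateUnitCmd(unit string) string {
	path := "/lib/systemd/system/" + unit
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && "+
			"sudo systemctl -f restart %[2]s; }",
		path, unit)
}

func main() {
	// diff exits non-zero when the files differ (or, as in the "can't stat"
	// output above, when the target doesn't exist yet), triggering the install branch.
	fmt.Println(updateUnitCmd("docker.service"))
}
```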
	I0805 16:10:04.932584    4013 start.go:293] postStartSetup for "ha-968000-m03" (driver="hyperkit")
	I0805 16:10:04.932592    4013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:10:04.932606    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:04.932806    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:10:04.932820    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:04.932921    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:04.933017    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:04.933114    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:04.933199    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/id_rsa Username:docker}
	I0805 16:10:04.965742    4013 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:10:04.968779    4013 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:10:04.968789    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:10:04.968872    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:10:04.969009    4013 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:10:04.969015    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:10:04.969171    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:10:04.977326    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:10:04.996442    4013 start.go:296] duration metric: took 63.849242ms for postStartSetup
	I0805 16:10:04.996464    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:04.996645    4013 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0805 16:10:04.996658    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:04.996749    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:04.996835    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:04.996919    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:04.996988    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/id_rsa Username:docker}
	I0805 16:10:05.029923    4013 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0805 16:10:05.029990    4013 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0805 16:10:05.062439    4013 fix.go:56] duration metric: took 37.48776057s for fixHost
	I0805 16:10:05.062463    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:05.062605    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:05.062687    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:05.062782    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:05.062875    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:05.062995    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:05.063135    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.7 22 <nil> <nil>}
	I0805 16:10:05.063142    4013 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 16:10:05.114144    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722899405.020487015
	
	I0805 16:10:05.114159    4013 fix.go:216] guest clock: 1722899405.020487015
	I0805 16:10:05.114164    4013 fix.go:229] Guest: 2024-08-05 16:10:05.020487015 -0700 PDT Remote: 2024-08-05 16:10:05.062453 -0700 PDT m=+89.419854401 (delta=-41.965985ms)
	I0805 16:10:05.114175    4013 fix.go:200] guest clock delta is within tolerance: -41.965985ms
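The guest clock check reads an epoch timestamp in seconds.nanoseconds form (1722899405.020487015 above), subtracts the host clock, and accepts the node when the delta is within tolerance. A sketch of that comparison; the 2-second threshold below is a hypothetical stand-in, not minikube's documented value:

```go
// Sketch: parse a guest "seconds.nanoseconds" timestamp and check the
// host/guest clock delta against a tolerance, as fix.go does above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseEpoch(s string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(s), ".")
	secs, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nanos int64
	if frac != "" {
		frac = (frac + "000000000")[:9] // pad/truncate fraction to nanoseconds
		if nanos, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(secs, nanos), nil
}

func main() {
	guest, err := parseEpoch("1722899405.020487015")
	if err != nil {
		panic(err)
	}
	host := guest.Add(41965985 * time.Nanosecond) // stand-in for time.Now(); delta taken from the log
	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // hypothetical threshold
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
}
```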
	I0805 16:10:05.114179    4013 start.go:83] releasing machines lock for "ha-968000-m03", held for 37.53951612s
	I0805 16:10:05.114196    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:05.114320    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetIP
	I0805 16:10:05.154856    4013 out.go:177] * Found network options:
	I0805 16:10:05.196438    4013 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0805 16:10:05.217521    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 16:10:05.217542    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:10:05.217557    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:05.218022    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:05.218155    4013 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:10:05.218244    4013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:10:05.218267    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	W0805 16:10:05.218289    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 16:10:05.218305    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:10:05.218380    4013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:10:05.218396    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:10:05.218397    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:05.218547    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:10:05.218562    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:05.218682    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:10:05.218701    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:05.218796    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/id_rsa Username:docker}
	I0805 16:10:05.218817    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:10:05.218922    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/id_rsa Username:docker}
	W0805 16:10:05.247739    4013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:10:05.247807    4013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:10:05.295633    4013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:10:05.295651    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:10:05.295736    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:10:05.311187    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:10:05.320167    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:10:05.328956    4013 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:10:05.329006    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:10:05.337987    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:10:05.346989    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:10:05.356292    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:10:05.365468    4013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:10:05.374794    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:10:05.383659    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:10:05.392613    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:10:05.401497    4013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:10:05.409761    4013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:10:05.417735    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:10:05.522068    4013 ssh_runner.go:195] Run: sudo systemctl restart containerd
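The sed runs above rewire /etc/containerd/config.toml for the "cgroupfs" driver; the key edit flips SystemdCgroup to false while preserving indentation. A Go equivalent of that one substitution, on an inline sample config (hypothetical, for illustration):

```go
// Sketch of the config.toml edit above: force SystemdCgroup = false
// (the "cgroupfs" driver), keeping leading whitespace like the sed -r call.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
		"    SystemdCgroup = true\n"
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}
```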
	I0805 16:10:05.541086    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:10:05.541154    4013 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:10:05.560931    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:10:05.572370    4013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:10:05.590083    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:10:05.601381    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:10:05.612999    4013 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:10:05.640303    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:10:05.651924    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:10:05.666834    4013 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:10:05.669785    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:10:05.677888    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:10:05.691535    4013 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:10:05.794601    4013 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:10:05.896489    4013 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:10:05.896516    4013 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:10:05.916844    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:10:06.013180    4013 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:10:08.281931    4013 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.2687312s)
	I0805 16:10:08.281998    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:10:08.292879    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:10:08.303134    4013 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:10:08.403828    4013 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:10:08.520343    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:10:08.633419    4013 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:10:08.648137    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:10:08.659447    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:10:08.754463    4013 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:10:08.821178    4013 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:10:08.821256    4013 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:10:08.825268    4013 start.go:563] Will wait 60s for crictl version
	I0805 16:10:08.825311    4013 ssh_runner.go:195] Run: which crictl
	I0805 16:10:08.828380    4013 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:10:08.856405    4013 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0805 16:10:08.856477    4013 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:10:08.873070    4013 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:10:08.917245    4013 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 16:10:08.958050    4013 out.go:177]   - env NO_PROXY=192.169.0.5
	I0805 16:10:08.978959    4013 out.go:177]   - env NO_PROXY=192.169.0.5,192.169.0.6
	I0805 16:10:08.999958    4013 main.go:141] libmachine: (ha-968000-m03) Calling .GetIP
	I0805 16:10:09.000163    4013 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0805 16:10:09.003143    4013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
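The /etc/hosts update here is a filter-and-append idiom: strip any line already ending in the tab-separated name, then append the fresh mapping, so repeated runs stay idempotent. An in-memory Go sketch of the transformation (the real flow does it remotely via grep/echo/cp under sudo):

```go
// Sketch of the /etc/hosts refresh logged above: drop any stale line for the
// name, then append the current IP mapping.
package main

import (
	"fmt"
	"strings"
)

func refreshHostsEntry(hosts, ip, name string) string {
	var out []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry for this name
		}
		out = append(out, line)
	}
	out = append(out, ip+"\t"+name)
	return strings.Join(out, "\n") + "\n"
}

func main() {
	before := "127.0.0.1\tlocalhost\n192.169.0.9\thost.minikube.internal\n"
	fmt.Print(refreshHostsEntry(before, "192.169.0.1", "host.minikube.internal"))
}
```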
	I0805 16:10:09.012521    4013 mustload.go:65] Loading cluster: ha-968000
	I0805 16:10:09.012700    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:10:09.012919    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:10:09.012941    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:10:09.021950    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51959
	I0805 16:10:09.022290    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:10:09.022650    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:10:09.022672    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:10:09.022912    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:10:09.023042    4013 main.go:141] libmachine: (ha-968000) Calling .GetState
	I0805 16:10:09.023120    4013 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:10:09.023210    4013 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid from json: 4025
	I0805 16:10:09.024146    4013 host.go:66] Checking if "ha-968000" exists ...
	I0805 16:10:09.024412    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:10:09.024436    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:10:09.033094    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51961
	I0805 16:10:09.033420    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:10:09.033772    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:10:09.033792    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:10:09.034017    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:10:09.034135    4013 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:10:09.034227    4013 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000 for IP: 192.169.0.7
	I0805 16:10:09.034233    4013 certs.go:194] generating shared ca certs ...
	I0805 16:10:09.034246    4013 certs.go:226] acquiring lock for ca certs: {Name:mkb83e058d89c7d4e66f4136f377a3c305b13735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:10:09.034388    4013 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key
	I0805 16:10:09.034442    4013 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key
	I0805 16:10:09.034452    4013 certs.go:256] generating profile certs ...
	I0805 16:10:09.034546    4013 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.key
	I0805 16:10:09.034648    4013 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key.526236ea
	I0805 16:10:09.034697    4013 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key
	I0805 16:10:09.034704    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 16:10:09.034725    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 16:10:09.034745    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 16:10:09.034764    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 16:10:09.034786    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 16:10:09.034809    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 16:10:09.034828    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 16:10:09.034845    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 16:10:09.034929    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem (1338 bytes)
	W0805 16:10:09.034968    4013 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0805 16:10:09.034982    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:10:09.035017    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem (1082 bytes)
	I0805 16:10:09.035050    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:10:09.035079    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem (1675 bytes)
	I0805 16:10:09.035147    4013 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:10:09.035187    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:10:09.035213    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem -> /usr/share/ca-certificates/1678.pem
	I0805 16:10:09.035232    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /usr/share/ca-certificates/16782.pem
	I0805 16:10:09.035261    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:10:09.035348    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:10:09.035432    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:10:09.035523    4013 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:10:09.035597    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:10:09.068818    4013 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0805 16:10:09.072729    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0805 16:10:09.083911    4013 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0805 16:10:09.087068    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0805 16:10:09.096135    4013 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0805 16:10:09.099562    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0805 16:10:09.109334    4013 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0805 16:10:09.112743    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0805 16:10:09.122244    4013 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0805 16:10:09.125580    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0805 16:10:09.134471    4013 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0805 16:10:09.137936    4013 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0805 16:10:09.147798    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:10:09.168268    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 16:10:09.188512    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:10:09.208613    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:10:09.229102    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0805 16:10:09.248927    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0805 16:10:09.269438    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:10:09.289326    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0805 16:10:09.309414    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:10:09.329327    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0805 16:10:09.349275    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0805 16:10:09.369465    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0805 16:10:09.383270    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0805 16:10:09.397217    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0805 16:10:09.410973    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0805 16:10:09.424636    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0805 16:10:09.438657    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0805 16:10:09.453241    4013 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0805 16:10:09.467220    4013 ssh_runner.go:195] Run: openssl version
	I0805 16:10:09.471496    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:10:09.479975    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:10:09.483494    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:10:09.483535    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:10:09.487639    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 16:10:09.496028    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0805 16:10:09.504248    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0805 16:10:09.507546    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:10:09.507582    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0805 16:10:09.511833    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0805 16:10:09.520110    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0805 16:10:09.528467    4013 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0805 16:10:09.531788    4013 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:10:09.531831    4013 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0805 16:10:09.536023    4013 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 16:10:09.544245    4013 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:10:09.547794    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 16:10:09.552109    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 16:10:09.556303    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 16:10:09.560442    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 16:10:09.564725    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 16:10:09.569207    4013 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
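Each `openssl x509 -checkend 86400` run above asks one question: does the certificate remain valid for at least another 24 hours? The same check expressed in Go against a PEM file (the local path below is hypothetical; the log checks certs inside the VM):

```go
// Sketch of the -checkend 86400 expiry test applied to each cert above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	blk, _ := pem.Decode(data)
	if blk == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(blk.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
```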
	I0805 16:10:09.573628    4013 kubeadm.go:934] updating node {m03 192.169.0.7 8443 v1.30.3 docker true true} ...
	I0805 16:10:09.573688    4013 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-968000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 16:10:09.573706    4013 kube-vip.go:115] generating kube-vip config ...
	I0805 16:10:09.573746    4013 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0805 16:10:09.586333    4013 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0805 16:10:09.586392    4013 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
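The lb_enable/lb_port settings in the manifest above were switched on by the modprobe at 16:10:09.573746: control-plane load-balancing is auto-enabled only when the IPVS kernel modules load. A sketch of that gate (illustrative, not minikube's code):

```go
// Sketch: enable kube-vip load-balancing only if the IPVS modules load,
// mirroring the `sudo sh -c "modprobe --all ip_vs ..."` run logged above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("sudo", "modprobe", "--all",
		"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack").Run()
	if err != nil {
		fmt.Println("IPVS unavailable; leaving kube-vip load-balancing off:", err)
		return
	}
	fmt.Println("auto-enabling control-plane load-balancing in kube-vip")
}
```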
	I0805 16:10:09.586454    4013 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 16:10:09.595015    4013 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:10:09.595072    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0805 16:10:09.604755    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0805 16:10:09.618293    4013 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:10:09.632089    4013 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0805 16:10:09.645814    4013 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0805 16:10:09.648794    4013 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:10:09.658221    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:10:09.755214    4013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:10:09.770035    4013 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.169.0.7 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:10:09.770231    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:10:09.791589    4013 out.go:177] * Verifying Kubernetes components...
	I0805 16:10:09.812147    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:10:09.922409    4013 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:10:09.937680    4013 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:10:09.937905    4013 kapi.go:59] client config for ha-968000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x85c5060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0805 16:10:09.937943    4013 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0805 16:10:09.938123    4013 node_ready.go:35] waiting up to 6m0s for node "ha-968000-m03" to be "Ready" ...
	I0805 16:10:09.938166    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:09.938171    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:09.938177    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:09.938184    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:09.940537    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:09.940846    4013 node_ready.go:49] node "ha-968000-m03" has status "Ready":"True"
	I0805 16:10:09.940856    4013 node_ready.go:38] duration metric: took 2.724361ms for node "ha-968000-m03" to be "Ready" ...
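The node_ready wait issues the GETs shown above and inspects the node's Ready condition until it is True or the 6m0s budget runs out. A client-go sketch of the same poll, using the kubeconfig path from the log (a sketch only, not minikube's round-tripper-level code):

```go
// Sketch: poll a node's Ready condition via client-go, approximating the
// node_ready wait above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19373-1122/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // same budget as the log
	defer cancel()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, "ha-968000-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for Ready")
		case <-time.After(2 * time.Second):
		}
	}
}
```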
	I0805 16:10:09.940863    4013 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:10:09.940900    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:10:09.940905    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:09.940911    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:09.940915    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:09.945944    4013 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 16:10:09.953862    4013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-hjp5z" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:09.953919    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-hjp5z
	I0805 16:10:09.953924    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:09.953930    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:09.953934    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:09.956348    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:09.956979    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:09.956988    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:09.956994    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:09.956998    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:09.959221    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:09.959622    4013 pod_ready.go:92] pod "coredns-7db6d8ff4d-hjp5z" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:09.959632    4013 pod_ready.go:81] duration metric: took 5.75325ms for pod "coredns-7db6d8ff4d-hjp5z" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:09.959646    4013 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:09.959683    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:09.959688    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:09.959693    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:09.959697    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:09.961820    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:09.962245    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:09.962252    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:09.962258    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:09.962262    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:09.964245    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:10.460326    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:10.460341    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:10.460347    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:10.460351    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:10.462931    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:10.463525    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:10.463534    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:10.463540    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:10.463545    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:10.465741    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:10.960459    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:10.960479    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:10.960487    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:10.960490    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:10.964999    4013 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 16:10:10.965521    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:10.965531    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:10.965538    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:10.965541    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:10.968401    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:11.459862    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:11.459879    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:11.459888    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:11.459896    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:11.462705    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:11.463338    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:11.463348    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:11.463355    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:11.463359    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:11.465847    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:11.960724    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:11.960741    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:11.960748    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:11.960751    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:11.963442    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:11.963893    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:11.963902    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:11.963909    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:11.963915    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:11.966015    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:11.966351    4013 pod_ready.go:102] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"False"
	I0805 16:10:12.460750    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:12.460767    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:12.460775    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:12.460780    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:12.463726    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:12.464380    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:12.464390    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:12.464397    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:12.464403    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:12.466771    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:12.959777    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:12.959794    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:12.959800    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:12.959803    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:12.963016    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:12.963521    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:12.963530    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:12.963537    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:12.963541    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:12.965964    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:13.461027    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:13.461044    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:13.461052    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:13.461056    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:13.463804    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:13.464772    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:13.464781    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:13.464789    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:13.464792    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:13.467029    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:13.961022    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:13.961082    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:13.961090    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:13.961093    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:13.963530    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:13.964018    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:13.964026    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:13.964037    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:13.964040    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:13.966396    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:13.966704    4013 pod_ready.go:102] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"False"
	I0805 16:10:14.460972    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:14.461029    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:14.461037    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:14.461040    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:14.463269    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:14.463827    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:14.463834    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:14.463840    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:14.463844    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:14.465651    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:14.960796    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:14.960810    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:14.960817    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:14.960821    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:14.963503    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:14.964069    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:14.964076    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:14.964082    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:14.964085    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:14.965973    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:15.460976    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:15.461042    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:15.461054    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:15.461062    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:15.464639    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:15.465242    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:15.465250    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:15.465255    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:15.465259    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:15.467095    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:15.960558    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:15.960569    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:15.960575    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:15.960579    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:15.962733    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:15.963261    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:15.963268    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:15.963274    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:15.963278    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:15.964836    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:16.460120    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:16.460142    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:16.460150    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:16.460154    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:16.462634    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:16.463246    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:16.463254    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:16.463260    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:16.463264    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:16.464841    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:16.465283    4013 pod_ready.go:102] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"False"
	I0805 16:10:16.959766    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:16.959781    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:16.959789    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:16.959792    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:16.962161    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:16.962538    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:16.962546    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:16.962551    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:16.962554    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:16.964199    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:17.459940    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:17.460028    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:17.460043    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:17.460058    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:17.463177    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:17.463929    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:17.463939    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:17.463947    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:17.463954    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:17.465814    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:17.960492    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:17.960517    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:17.960529    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:17.960535    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:17.963854    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:17.964340    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:17.964348    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:17.964354    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:17.964359    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:17.965846    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:18.459859    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:18.459922    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:18.459934    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:18.459943    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:18.463097    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:18.463745    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:18.463756    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:18.463764    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:18.463769    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:18.466108    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:18.466647    4013 pod_ready.go:102] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"False"
	I0805 16:10:18.961260    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:18.961336    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:18.961346    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:18.961351    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:18.964473    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:18.964862    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:18.964870    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:18.964876    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:18.964879    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:18.966810    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:19.461327    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:19.461342    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:19.461349    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:19.461352    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:19.463586    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:19.464052    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:19.464061    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:19.464067    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:19.464071    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:19.465827    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:19.959893    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:19.959916    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:19.959928    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:19.959936    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:19.963708    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:19.964323    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:19.964330    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:19.964337    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:19.964341    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:19.966276    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:20.460973    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:20.460999    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:20.461012    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:20.461019    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:20.464211    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:20.464772    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:20.464780    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:20.464786    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:20.464790    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:20.466297    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:20.466755    4013 pod_ready.go:102] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"False"
	I0805 16:10:20.960914    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:20.960928    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:20.960937    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:20.960940    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:20.963464    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:20.963838    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:20.963846    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:20.963851    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:20.963855    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:20.965570    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:21.461564    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:21.461601    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:21.461612    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:21.461617    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:21.464031    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:21.464425    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:21.464433    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:21.464439    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:21.464442    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:21.466022    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:21.960219    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:21.960247    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:21.960261    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:21.960271    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:21.963797    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:21.964415    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:21.964422    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:21.964428    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:21.964431    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:21.966018    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.460781    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:22.460829    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.460837    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.460841    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.463024    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:22.463683    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:22.463691    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.463697    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.463701    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.465467    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.960911    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mfzln
	I0805 16:10:22.960935    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.960982    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.960999    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.964197    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:22.964786    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:22.964793    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.964799    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.964802    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.966466    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.966844    4013 pod_ready.go:92] pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:22.966853    4013 pod_ready.go:81] duration metric: took 13.007198003s for pod "coredns-7db6d8ff4d-mfzln" in "kube-system" namespace to be "Ready" ...
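
Each GET pair above (pod, then the node it runs on) is one iteration of minikube's pod_ready poll, which rechecks roughly every 500ms until the pod's Ready condition turns "True" or the 6m budget runs out; here coredns-7db6d8ff4d-mfzln needed about 13 seconds. A sketch of that loop under the same assumptions (clientset cs built as in the earlier sketch; wait.PollImmediate from k8s.io/apimachinery):

    package readywait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a pod every 500ms, as in the log, until its
    // PodReady condition is True or the timeout expires.
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }
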
	I0805 16:10:22.966869    4013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:22.966901    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000
	I0805 16:10:22.966906    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.966912    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.966916    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.968437    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.968826    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:22.968833    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.968839    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.968842    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.970427    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.970912    4013 pod_ready.go:92] pod "etcd-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:22.970922    4013 pod_ready.go:81] duration metric: took 4.046965ms for pod "etcd-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:22.970928    4013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:22.970963    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m02
	I0805 16:10:22.970968    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.970973    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.970978    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.972820    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.973377    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:22.973385    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.973391    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.973395    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.975041    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.975357    4013 pod_ready.go:92] pod "etcd-ha-968000-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:22.975366    4013 pod_ready.go:81] duration metric: took 4.433286ms for pod "etcd-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:22.975373    4013 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:22.975410    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m03
	I0805 16:10:22.975415    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.975421    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.975428    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.977033    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:22.977409    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:22.977416    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:22.977422    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:22.977425    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:22.978990    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:23.477076    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m03
	I0805 16:10:23.477102    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:23.477114    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:23.477120    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:23.480444    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:23.480920    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:23.480927    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:23.480934    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:23.480937    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:23.482684    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:23.976407    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m03
	I0805 16:10:23.976432    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:23.976443    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:23.976450    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:23.979450    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:23.979998    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:23.980005    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:23.980011    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:23.980015    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:23.981679    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:24.476784    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-968000-m03
	I0805 16:10:24.476798    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.476805    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.476814    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.479014    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:24.479514    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:24.479522    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.479528    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.479531    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.481269    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:24.481711    4013 pod_ready.go:92] pod "etcd-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:24.481720    4013 pod_ready.go:81] duration metric: took 1.506341693s for pod "etcd-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:24.481735    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:24.481776    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000
	I0805 16:10:24.481781    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.481787    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.481791    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.483526    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:24.483895    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:24.483903    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.483909    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.483913    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.485324    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:24.485707    4013 pod_ready.go:92] pod "kube-apiserver-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:24.485716    4013 pod_ready.go:81] duration metric: took 3.976033ms for pod "kube-apiserver-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:24.485725    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:24.485755    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m02
	I0805 16:10:24.485761    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.485766    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.485771    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.487225    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:24.561028    4013 request.go:629] Waited for 73.447214ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:24.561115    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:24.561127    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.561139    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.561146    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.564386    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:24.564772    4013 pod_ready.go:92] pod "kube-apiserver-ha-968000-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:24.564785    4013 pod_ready.go:81] duration metric: took 79.054588ms for pod "kube-apiserver-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
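
The "Waited for ... due to client-side throttling" lines come from client-go's own rate limiter, not the API server: with QPS and Burst left at 0 in the rest.Config above, client-go falls back to its defaults (5 requests/s, burst 10), so the rapid pod/node GET pairs start queueing once the burst is spent. If this polling cadence were the bottleneck, the limits could be raised on the config before building the clientset; a small sketch:

    package readywait

    import "k8s.io/client-go/rest"

    // withHigherLimits raises client-go's client-side rate limiter.
    // Zero values (as in the logged rest.Config) mean the defaults of
    // QPS 5 and Burst 10, which is what produces the "Waited for ..."
    // throttling messages above.
    func withHigherLimits(cfg *rest.Config) *rest.Config {
        cfg.QPS = 50
        cfg.Burst = 100
        return cfg
    }
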
	I0805 16:10:24.564795    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:24.761641    4013 request.go:629] Waited for 196.793833ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m03
	I0805 16:10:24.761722    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-968000-m03
	I0805 16:10:24.761728    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.761734    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.761738    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.763753    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:24.961783    4013 request.go:629] Waited for 197.554669ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:24.961853    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:24.961860    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:24.961868    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:24.961872    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:24.964254    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:24.964712    4013 pod_ready.go:92] pod "kube-apiserver-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:24.964722    4013 pod_ready.go:81] duration metric: took 399.920246ms for pod "kube-apiserver-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:24.964728    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:25.161961    4013 request.go:629] Waited for 197.196834ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000
	I0805 16:10:25.162018    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000
	I0805 16:10:25.162024    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:25.162028    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:25.162032    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:25.164098    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:25.362062    4013 request.go:629] Waited for 197.590252ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:25.362143    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:25.362150    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:25.362158    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:25.362164    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:25.364469    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:25.364982    4013 pod_ready.go:92] pod "kube-controller-manager-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:25.364995    4013 pod_ready.go:81] duration metric: took 400.260627ms for pod "kube-controller-manager-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:25.365004    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:25.561095    4013 request.go:629] Waited for 196.05214ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m02
	I0805 16:10:25.561139    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m02
	I0805 16:10:25.561147    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:25.561173    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:25.561180    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:25.563313    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:25.761969    4013 request.go:629] Waited for 198.293569ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:25.762009    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:25.762016    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:25.762027    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:25.762062    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:25.764659    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:25.765098    4013 pod_ready.go:92] pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:25.765107    4013 pod_ready.go:81] duration metric: took 400.096353ms for pod "kube-controller-manager-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:25.765120    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:25.961382    4013 request.go:629] Waited for 196.226504ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:10:25.961416    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:10:25.961422    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:25.961434    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:25.961446    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:25.963534    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:26.162364    4013 request.go:629] Waited for 198.280605ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:26.162397    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:26.162402    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:26.162408    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:26.162412    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:26.164357    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:26.362197    4013 request.go:629] Waited for 94.915828ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:10:26.362260    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:10:26.362266    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:26.362273    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:26.362276    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:26.364350    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:26.562545    4013 request.go:629] Waited for 197.745091ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:26.562624    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:26.562630    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:26.562637    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:26.562640    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:26.565319    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:26.767236    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:10:26.767251    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:26.767257    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:26.767262    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:26.769341    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:26.962089    4013 request.go:629] Waited for 192.24367ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:26.962162    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:26.962168    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:26.962175    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:26.962178    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:26.964212    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:27.267240    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-968000-m03
	I0805 16:10:27.267258    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:27.267266    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:27.267270    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:27.269879    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:27.362824    4013 request.go:629] Waited for 92.466824ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:27.362855    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:27.362861    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:27.362867    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:27.362873    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:27.364886    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:27.365316    4013 pod_ready.go:92] pod "kube-controller-manager-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:27.365326    4013 pod_ready.go:81] duration metric: took 1.600199608s for pod "kube-controller-manager-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:27.365333    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fvd5q" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:27.562545    4013 request.go:629] Waited for 197.173723ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvd5q
	I0805 16:10:27.562641    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fvd5q
	I0805 16:10:27.562650    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:27.562667    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:27.562672    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:27.564919    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:27.762505    4013 request.go:629] Waited for 197.212423ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:27.762538    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:27.762543    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:27.762549    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:27.762554    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:27.764932    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:27.765395    4013 pod_ready.go:92] pod "kube-proxy-fvd5q" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:27.765405    4013 pod_ready.go:81] duration metric: took 400.066585ms for pod "kube-proxy-fvd5q" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:27.765413    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p4xgk" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:27.962081    4013 request.go:629] Waited for 196.624809ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p4xgk
	I0805 16:10:27.962208    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-p4xgk
	I0805 16:10:27.962219    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:27.962231    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:27.962265    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:27.965643    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:28.161558    4013 request.go:629] Waited for 195.152397ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:28.161641    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:28.161650    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:28.161658    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:28.161662    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:28.164062    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:28.164477    4013 pod_ready.go:92] pod "kube-proxy-p4xgk" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:28.164486    4013 pod_ready.go:81] duration metric: took 399.068204ms for pod "kube-proxy-p4xgk" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:28.164494    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qptt6" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:28.362129    4013 request.go:629] Waited for 197.598336ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qptt6
	I0805 16:10:28.362162    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qptt6
	I0805 16:10:28.362167    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:28.362173    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:28.362177    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:28.364194    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:28.561667    4013 request.go:629] Waited for 196.999586ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m04
	I0805 16:10:28.561700    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m04
	I0805 16:10:28.561748    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:28.561756    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:28.561759    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:28.564274    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:28.564561    4013 pod_ready.go:97] node "ha-968000-m04" hosting pod "kube-proxy-qptt6" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m04" has status "Ready":"Unknown"
	I0805 16:10:28.564573    4013 pod_ready.go:81] duration metric: took 400.073458ms for pod "kube-proxy-qptt6" in "kube-system" namespace to be "Ready" ...
	E0805 16:10:28.564580    4013 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-968000-m04" hosting pod "kube-proxy-qptt6" in "kube-system" namespace is currently not "Ready" (skipping!): node "ha-968000-m04" has status "Ready":"Unknown"
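
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's default client-side rate limiter (QPS=5, Burst=10), not from the apiserver. A minimal sketch of raising those limits on a rest.Config — the kubeconfig path here is an illustrative assumption, not minikube's own code:

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load a rest.Config from a kubeconfig file (path is hypothetical).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		// The zero-value defaults (QPS=5, Burst=10) produce the
		// "Waited for ... due to client-side throttling" messages seen above.
		cfg.QPS = 50
		cfg.Burst = 100
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(clientset != nil)
	}
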
	I0805 16:10:28.564585    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-v87jb" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:28.761155    4013 request.go:629] Waited for 196.536425ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v87jb
	I0805 16:10:28.761194    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v87jb
	I0805 16:10:28.761220    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:28.761235    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:28.761241    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:28.763501    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:28.962341    4013 request.go:629] Waited for 198.29849ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:28.962395    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:28.962429    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:28.962455    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:28.962470    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:28.965239    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:28.965595    4013 pod_ready.go:92] pod "kube-proxy-v87jb" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:28.965603    4013 pod_ready.go:81] duration metric: took 401.013479ms for pod "kube-proxy-v87jb" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:28.965611    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:29.161737    4013 request.go:629] Waited for 196.060247ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000
	I0805 16:10:29.161876    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000
	I0805 16:10:29.161889    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:29.161901    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:29.161907    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:29.165617    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:29.361022    4013 request.go:629] Waited for 194.748045ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:29.361106    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000
	I0805 16:10:29.361115    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:29.361123    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:29.361133    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:29.363092    4013 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:10:29.363445    4013 pod_ready.go:92] pod "kube-scheduler-ha-968000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:29.363455    4013 pod_ready.go:81] duration metric: took 397.839229ms for pod "kube-scheduler-ha-968000" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:29.363462    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:29.562518    4013 request.go:629] Waited for 199.009741ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m02
	I0805 16:10:29.562602    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m02
	I0805 16:10:29.562608    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:29.562616    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:29.562621    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:29.565612    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:29.761127    4013 request.go:629] Waited for 195.236074ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:29.761159    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m02
	I0805 16:10:29.761163    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:29.761169    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:29.761174    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:29.763545    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:29.764045    4013 pod_ready.go:92] pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:29.764056    4013 pod_ready.go:81] duration metric: took 400.588926ms for pod "kube-scheduler-ha-968000-m02" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:29.764063    4013 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:29.961261    4013 request.go:629] Waited for 197.156425ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m03
	I0805 16:10:29.961356    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-968000-m03
	I0805 16:10:29.961365    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:29.961373    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:29.961379    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:29.963937    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:30.162354    4013 request.go:629] Waited for 197.925421ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:30.162411    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-968000-m03
	I0805 16:10:30.162422    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:30.162485    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:30.162494    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:30.165503    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:30.166291    4013 pod_ready.go:92] pod "kube-scheduler-ha-968000-m03" in "kube-system" namespace has status "Ready":"True"
	I0805 16:10:30.166300    4013 pod_ready.go:81] duration metric: took 402.232052ms for pod "kube-scheduler-ha-968000-m03" in "kube-system" namespace to be "Ready" ...
	I0805 16:10:30.166308    4013 pod_ready.go:38] duration metric: took 20.225431391s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
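
The pod_ready.go checks above boil down to reading each pod's PodReady condition. A minimal sketch of the same check with client-go, assuming a hypothetical kubeconfig path:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the PodReady condition is True, the same
	// test pod_ready.go applies to each system pod in the log above.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "kube-proxy-fvd5q", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println(podIsReady(pod))
	}
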
	I0805 16:10:30.166322    4013 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:10:30.166373    4013 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:10:30.178781    4013 api_server.go:72] duration metric: took 20.408716061s to wait for apiserver process to appear ...
	I0805 16:10:30.178794    4013 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:10:30.178806    4013 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0805 16:10:30.181777    4013 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0805 16:10:30.181817    4013 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0805 16:10:30.181822    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:30.181828    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:30.181832    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:30.182461    4013 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:10:30.182514    4013 api_server.go:141] control plane version: v1.30.3
	I0805 16:10:30.182522    4013 api_server.go:131] duration metric: took 3.723541ms to wait for apiserver health ...
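
The healthz probe above is a plain HTTPS GET that expects the literal body "ok". A sketch of the same probe, using InsecureSkipVerify only because the apiserver's cluster-local CA is not wired in here (a real check would load that CA):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			// Sketch only: skip verification instead of trusting the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.169.0.5:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	}
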
	I0805 16:10:30.182527    4013 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 16:10:30.361346    4013 request.go:629] Waited for 178.775767ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:10:30.361395    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:10:30.361407    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:30.361483    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:30.361495    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:30.367528    4013 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 16:10:30.373218    4013 system_pods.go:59] 26 kube-system pods found
	I0805 16:10:30.373231    4013 system_pods.go:61] "coredns-7db6d8ff4d-hjp5z" [e31fd97b-2727-4db3-a17c-3302c320832b] Running
	I0805 16:10:30.373242    4013 system_pods.go:61] "coredns-7db6d8ff4d-mfzln" [ea5c136e-84a6-4253-8f61-85c427b83840] Running
	I0805 16:10:30.373246    4013 system_pods.go:61] "etcd-ha-968000" [24590478-199e-4d78-8312-3d5924d6e915] Running
	I0805 16:10:30.373249    4013 system_pods.go:61] "etcd-ha-968000-m02" [cefe6f5a-3a87-4ccf-9419-0b864275c9c9] Running
	I0805 16:10:30.373253    4013 system_pods.go:61] "etcd-ha-968000-m03" [ec752887-5a12-4888-ba88-3fb5d54c6ce7] Running
	I0805 16:10:30.373255    4013 system_pods.go:61] "kindnet-5dshm" [2641d2a9-a26a-4cbe-b8ea-99ed7c7af43c] Running
	I0805 16:10:30.373258    4013 system_pods.go:61] "kindnet-cglm9" [80a5d2ca-3d9f-4347-bb68-cd6eac4e4aa8] Running
	I0805 16:10:30.373261    4013 system_pods.go:61] "kindnet-fp5ns" [bf9c4454-9491-4a21-8f0a-6c6f21919551] Running
	I0805 16:10:30.373267    4013 system_pods.go:61] "kindnet-qh6l6" [382ac149-5a4e-4fe4-aaaa-9c929c93b101] Running
	I0805 16:10:30.373270    4013 system_pods.go:61] "kube-apiserver-ha-968000" [04e9a721-eb6e-47b4-a7f0-2cad1ee201f7] Running
	I0805 16:10:30.373272    4013 system_pods.go:61] "kube-apiserver-ha-968000-m02" [0465a825-6697-4a98-bb88-18df7929a5dd] Running
	I0805 16:10:30.373275    4013 system_pods.go:61] "kube-apiserver-ha-968000-m03" [a0d3fc83-9820-463e-81bb-2abcb1b4c868] Running
	I0805 16:10:30.373278    4013 system_pods.go:61] "kube-controller-manager-ha-968000" [2078d070-21b4-4d47-a4d3-b130fa8b3aaf] Running
	I0805 16:10:30.373280    4013 system_pods.go:61] "kube-controller-manager-ha-968000-m02" [f0a1cc06-05bb-4efa-9a53-ebccba2b5f9e] Running
	I0805 16:10:30.373283    4013 system_pods.go:61] "kube-controller-manager-ha-968000-m03" [d140abba-93f2-4062-8ee8-3918ff5ae882] Running
	I0805 16:10:30.373286    4013 system_pods.go:61] "kube-proxy-fvd5q" [f2f13535-5802-4a1c-8243-48de42b79e74] Running
	I0805 16:10:30.373290    4013 system_pods.go:61] "kube-proxy-p4xgk" [aaca6036-f95c-44fb-a358-5ac881148fa4] Running
	I0805 16:10:30.373293    4013 system_pods.go:61] "kube-proxy-qptt6" [a826a636-1d05-4cca-a56d-d25a9cf41506] Running
	I0805 16:10:30.373296    4013 system_pods.go:61] "kube-proxy-v87jb" [d98f61ac-3a61-452c-8507-7258a9703c15] Running
	I0805 16:10:30.373298    4013 system_pods.go:61] "kube-scheduler-ha-968000" [20bf4b5e-71a1-4708-bb6a-34b0e44f196d] Running
	I0805 16:10:30.373301    4013 system_pods.go:61] "kube-scheduler-ha-968000-m02" [e590d5bf-9517-433b-9759-5b0f16cfe9a9] Running
	I0805 16:10:30.373303    4013 system_pods.go:61] "kube-scheduler-ha-968000-m03" [91120005-f0b0-47d5-a91c-c06b12e6da3e] Running
	I0805 16:10:30.373306    4013 system_pods.go:61] "kube-vip-ha-968000" [ac1aab33-b1d7-4b08-bde4-1bbd87c671f6] Running
	I0805 16:10:30.373308    4013 system_pods.go:61] "kube-vip-ha-968000-m02" [713fc36a-5582-464c-82d3-02905c81b753] Running
	I0805 16:10:30.373311    4013 system_pods.go:61] "kube-vip-ha-968000-m03" [d94a7e1c-9ddd-4229-b4cd-ac05384dd20a] Running
	I0805 16:10:30.373315    4013 system_pods.go:61] "storage-provisioner" [52e2952a-756d-4f65-84f5-588cb6563297] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 16:10:30.373320    4013 system_pods.go:74] duration metric: took 190.788685ms to wait for pod list to return data ...
	I0805 16:10:30.373327    4013 default_sa.go:34] waiting for default service account to be created ...
	I0805 16:10:30.561033    4013 request.go:629] Waited for 187.657545ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:10:30.561084    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:10:30.561123    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:30.561138    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:30.561146    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:30.564680    4013 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:10:30.564786    4013 default_sa.go:45] found service account: "default"
	I0805 16:10:30.564796    4013 default_sa.go:55] duration metric: took 191.464074ms for default service account to be created ...
	I0805 16:10:30.564801    4013 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 16:10:30.761949    4013 request.go:629] Waited for 197.098715ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:10:30.762013    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0805 16:10:30.762021    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:30.762029    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:30.762035    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:30.768776    4013 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 16:10:30.774173    4013 system_pods.go:86] 26 kube-system pods found
	I0805 16:10:30.774191    4013 system_pods.go:89] "coredns-7db6d8ff4d-hjp5z" [e31fd97b-2727-4db3-a17c-3302c320832b] Running
	I0805 16:10:30.774196    4013 system_pods.go:89] "coredns-7db6d8ff4d-mfzln" [ea5c136e-84a6-4253-8f61-85c427b83840] Running
	I0805 16:10:30.774200    4013 system_pods.go:89] "etcd-ha-968000" [24590478-199e-4d78-8312-3d5924d6e915] Running
	I0805 16:10:30.774203    4013 system_pods.go:89] "etcd-ha-968000-m02" [cefe6f5a-3a87-4ccf-9419-0b864275c9c9] Running
	I0805 16:10:30.774207    4013 system_pods.go:89] "etcd-ha-968000-m03" [ec752887-5a12-4888-ba88-3fb5d54c6ce7] Running
	I0805 16:10:30.774211    4013 system_pods.go:89] "kindnet-5dshm" [2641d2a9-a26a-4cbe-b8ea-99ed7c7af43c] Running
	I0805 16:10:30.774214    4013 system_pods.go:89] "kindnet-cglm9" [80a5d2ca-3d9f-4347-bb68-cd6eac4e4aa8] Running
	I0805 16:10:30.774219    4013 system_pods.go:89] "kindnet-fp5ns" [bf9c4454-9491-4a21-8f0a-6c6f21919551] Running
	I0805 16:10:30.774222    4013 system_pods.go:89] "kindnet-qh6l6" [382ac149-5a4e-4fe4-aaaa-9c929c93b101] Running
	I0805 16:10:30.774225    4013 system_pods.go:89] "kube-apiserver-ha-968000" [04e9a721-eb6e-47b4-a7f0-2cad1ee201f7] Running
	I0805 16:10:30.774229    4013 system_pods.go:89] "kube-apiserver-ha-968000-m02" [0465a825-6697-4a98-bb88-18df7929a5dd] Running
	I0805 16:10:30.774232    4013 system_pods.go:89] "kube-apiserver-ha-968000-m03" [a0d3fc83-9820-463e-81bb-2abcb1b4c868] Running
	I0805 16:10:30.774236    4013 system_pods.go:89] "kube-controller-manager-ha-968000" [2078d070-21b4-4d47-a4d3-b130fa8b3aaf] Running
	I0805 16:10:30.774240    4013 system_pods.go:89] "kube-controller-manager-ha-968000-m02" [f0a1cc06-05bb-4efa-9a53-ebccba2b5f9e] Running
	I0805 16:10:30.774243    4013 system_pods.go:89] "kube-controller-manager-ha-968000-m03" [d140abba-93f2-4062-8ee8-3918ff5ae882] Running
	I0805 16:10:30.774246    4013 system_pods.go:89] "kube-proxy-fvd5q" [f2f13535-5802-4a1c-8243-48de42b79e74] Running
	I0805 16:10:30.774250    4013 system_pods.go:89] "kube-proxy-p4xgk" [aaca6036-f95c-44fb-a358-5ac881148fa4] Running
	I0805 16:10:30.774253    4013 system_pods.go:89] "kube-proxy-qptt6" [a826a636-1d05-4cca-a56d-d25a9cf41506] Running
	I0805 16:10:30.774257    4013 system_pods.go:89] "kube-proxy-v87jb" [d98f61ac-3a61-452c-8507-7258a9703c15] Running
	I0805 16:10:30.774261    4013 system_pods.go:89] "kube-scheduler-ha-968000" [20bf4b5e-71a1-4708-bb6a-34b0e44f196d] Running
	I0805 16:10:30.774265    4013 system_pods.go:89] "kube-scheduler-ha-968000-m02" [e590d5bf-9517-433b-9759-5b0f16cfe9a9] Running
	I0805 16:10:30.774268    4013 system_pods.go:89] "kube-scheduler-ha-968000-m03" [91120005-f0b0-47d5-a91c-c06b12e6da3e] Running
	I0805 16:10:30.774271    4013 system_pods.go:89] "kube-vip-ha-968000" [ac1aab33-b1d7-4b08-bde4-1bbd87c671f6] Running
	I0805 16:10:30.774275    4013 system_pods.go:89] "kube-vip-ha-968000-m02" [713fc36a-5582-464c-82d3-02905c81b753] Running
	I0805 16:10:30.774281    4013 system_pods.go:89] "kube-vip-ha-968000-m03" [d94a7e1c-9ddd-4229-b4cd-ac05384dd20a] Running
	I0805 16:10:30.774287    4013 system_pods.go:89] "storage-provisioner" [52e2952a-756d-4f65-84f5-588cb6563297] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 16:10:30.774292    4013 system_pods.go:126] duration metric: took 209.48655ms to wait for k8s-apps to be running ...
	I0805 16:10:30.774299    4013 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 16:10:30.774355    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:10:30.784922    4013 system_svc.go:56] duration metric: took 10.617828ms WaitForService to wait for kubelet
	I0805 16:10:30.784940    4013 kubeadm.go:582] duration metric: took 21.014875463s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:10:30.784959    4013 node_conditions.go:102] verifying NodePressure condition ...
	I0805 16:10:30.960928    4013 request.go:629] Waited for 175.930639ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0805 16:10:30.960954    4013 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0805 16:10:30.960958    4013 round_trippers.go:469] Request Headers:
	I0805 16:10:30.960965    4013 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:10:30.960969    4013 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:10:30.963520    4013 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:10:30.964254    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:10:30.964263    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:10:30.964270    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:10:30.964274    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:10:30.964278    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:10:30.964281    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:10:30.964284    4013 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:10:30.964287    4013 node_conditions.go:123] node cpu capacity is 2
	I0805 16:10:30.964290    4013 node_conditions.go:105] duration metric: took 179.327419ms to run NodePressure ...
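
The NodePressure pass above reads each node's reported capacity (cpu=2, ephemeral-storage=17734596Ki per node in this run). A minimal sketch of pulling those figures with client-go, again assuming a hypothetical kubeconfig path:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
		}
	}
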
	I0805 16:10:30.964299    4013 start.go:241] waiting for startup goroutines ...
	I0805 16:10:30.964314    4013 start.go:255] writing updated cluster config ...
	I0805 16:10:30.985934    4013 out.go:177] 
	I0805 16:10:31.006970    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:10:31.007089    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:10:31.028647    4013 out.go:177] * Starting "ha-968000-m04" worker node in "ha-968000" cluster
	I0805 16:10:31.070449    4013 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:10:31.070470    4013 cache.go:56] Caching tarball of preloaded images
	I0805 16:10:31.070587    4013 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:10:31.070597    4013 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:10:31.070661    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:10:31.071212    4013 start.go:360] acquireMachinesLock for ha-968000-m04: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:10:31.071274    4013 start.go:364] duration metric: took 48.958µs to acquireMachinesLock for "ha-968000-m04"
	I0805 16:10:31.071288    4013 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:10:31.071292    4013 fix.go:54] fixHost starting: m04
	I0805 16:10:31.071532    4013 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:10:31.071551    4013 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:10:31.080682    4013 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51965
	I0805 16:10:31.081033    4013 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:10:31.081390    4013 main.go:141] libmachine: Using API Version  1
	I0805 16:10:31.081404    4013 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:10:31.081602    4013 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:10:31.081699    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:10:31.081797    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetState
	I0805 16:10:31.081874    4013 main.go:141] libmachine: (ha-968000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:10:31.081960    4013 main.go:141] libmachine: (ha-968000-m04) DBG | hyperkit pid from json: 3587
	I0805 16:10:31.082940    4013 main.go:141] libmachine: (ha-968000-m04) DBG | hyperkit pid 3587 missing from process table
	I0805 16:10:31.082969    4013 fix.go:112] recreateIfNeeded on ha-968000-m04: state=Stopped err=<nil>
	I0805 16:10:31.082980    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	W0805 16:10:31.083071    4013 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:10:31.103629    4013 out.go:177] * Restarting existing hyperkit VM for "ha-968000-m04" ...
	I0805 16:10:31.144437    4013 main.go:141] libmachine: (ha-968000-m04) Calling .Start
	I0805 16:10:31.144560    4013 main.go:141] libmachine: (ha-968000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:10:31.144576    4013 main.go:141] libmachine: (ha-968000-m04) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/hyperkit.pid
	I0805 16:10:31.144624    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Using UUID a18c3228-c5cd-4311-88be-5c31f452a5bc
	I0805 16:10:31.170211    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Generated MAC 2e:80:64:4a:6a:1a
	I0805 16:10:31.170234    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000
	I0805 16:10:31.170385    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a18c3228-c5cd-4311-88be-5c31f452a5bc", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ad770)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:10:31.170420    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a18c3228-c5cd-4311-88be-5c31f452a5bc", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0002ad770)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:10:31.170473    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a18c3228-c5cd-4311-88be-5c31f452a5bc", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/ha-968000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"}
	I0805 16:10:31.170506    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a18c3228-c5cd-4311-88be-5c31f452a5bc -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/ha-968000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"
	I0805 16:10:31.170534    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:10:31.171899    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 DEBUG: hyperkit: Pid is 4076
	I0805 16:10:31.172381    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Attempt 0
	I0805 16:10:31.172398    4013 main.go:141] libmachine: (ha-968000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:10:31.172450    4013 main.go:141] libmachine: (ha-968000-m04) DBG | hyperkit pid from json: 4076
	I0805 16:10:31.173609    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Searching for 2e:80:64:4a:6a:1a in /var/db/dhcpd_leases ...
	I0805 16:10:31.173677    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0805 16:10:31.173696    4013 main.go:141] libmachine: (ha-968000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b2ad30}
	I0805 16:10:31.173728    4013 main.go:141] libmachine: (ha-968000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:10:31.173759    4013 main.go:141] libmachine: (ha-968000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2acfd}
	I0805 16:10:31.173793    4013 main.go:141] libmachine: (ha-968000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b15b5a}
	I0805 16:10:31.173811    4013 main.go:141] libmachine: (ha-968000-m04) DBG | Found match: 2e:80:64:4a:6a:1a
	I0805 16:10:31.173825    4013 main.go:141] libmachine: (ha-968000-m04) DBG | IP: 192.169.0.8
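
The driver recovers the VM's IP by scanning /var/db/dhcpd_leases for the generated MAC, as logged above. A sketch of that lookup; the exact field names and ordering inside the lease file are assumptions here (the log shows the parsed result, not the raw file), so treat this as illustrative only:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findLeaseIP scans a dhcpd_leases-style file for a MAC address and
	// returns the ip_address recorded in the same lease block. Field names
	// ("ip_address=", MAC on a following line) are assumed, not verified.
	func findLeaseIP(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ip_address=") {
				ip = strings.TrimPrefix(line, "ip_address=")
			}
			if strings.Contains(line, mac) {
				return ip, nil
			}
		}
		return "", fmt.Errorf("MAC %s not found", mac)
	}

	func main() {
		ip, err := findLeaseIP("/var/db/dhcpd_leases", "2e:80:64:4a:6a:1a")
		if err != nil {
			panic(err)
		}
		fmt.Println(ip) // expect 192.169.0.8 per the lease table above
	}
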
	I0805 16:10:31.173829    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetConfigRaw
	I0805 16:10:31.174658    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetIP
	I0805 16:10:31.174867    4013 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:10:31.175539    4013 machine.go:94] provisionDockerMachine start ...
	I0805 16:10:31.175554    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:10:31.175674    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:10:31.175766    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:10:31.175918    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:10:31.176065    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:10:31.176193    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:10:31.176341    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:10:31.176494    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:10:31.176502    4013 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:10:31.179979    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:10:31.189022    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:10:31.190141    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:10:31.190167    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:10:31.190183    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:10:31.190196    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:10:31.578293    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:10:31.578309    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:10:31.693368    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:10:31.693393    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:10:31.693424    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:10:31.693448    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:10:31.694196    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:10:31.694209    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:31 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:10:37.416235    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:37 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:10:37.416360    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:37 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:10:37.416373    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:37 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:10:37.440251    4013 main.go:141] libmachine: (ha-968000-m04) DBG | 2024/08/05 16:10:37 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:11:06.247173    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 16:11:06.247187    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetMachineName
	I0805 16:11:06.247309    4013 buildroot.go:166] provisioning hostname "ha-968000-m04"
	I0805 16:11:06.247318    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetMachineName
	I0805 16:11:06.247423    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.247508    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:06.247594    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.247671    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.247772    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:06.247899    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:11:06.248060    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:11:06.248068    4013 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-968000-m04 && echo "ha-968000-m04" | sudo tee /etc/hostname
	I0805 16:11:06.317371    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-968000-m04
	
	I0805 16:11:06.317388    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.317526    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:06.317622    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.317715    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.317808    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:06.317937    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:11:06.318101    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:11:06.318113    4013 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-968000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-968000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-968000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:11:06.382855    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 
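
The /etc/hosts script above is idempotent: it only touches the file when the hostname is absent, rewriting the 127.0.1.1 entry if one exists and appending otherwise. A sketch of composing that command string for an SSH runner; the helper minikube actually uses is not shown in this log, so this just reproduces the grep/sed/tee logic:

	package main

	import "fmt"

	func main() {
		host := "ha-968000-m04"
		// Build the same conditional edit the provisioner ran above.
		cmd := fmt.Sprintf(`if ! grep -xq '.*\s%s' /etc/hosts; then
	  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
	    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts
	  else
	    echo '127.0.1.1 %s' | sudo tee -a /etc/hosts
	  fi
	fi`, host, host, host)
		fmt.Println(cmd)
	}
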
	I0805 16:11:06.382871    4013 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:11:06.382888    4013 buildroot.go:174] setting up certificates
	I0805 16:11:06.382895    4013 provision.go:84] configureAuth start
	I0805 16:11:06.382903    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetMachineName
	I0805 16:11:06.383053    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetIP
	I0805 16:11:06.383164    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.383233    4013 provision.go:143] copyHostCerts
	I0805 16:11:06.383260    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:11:06.383324    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:11:06.383330    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:11:06.383467    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:11:06.383688    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:11:06.383735    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:11:06.383741    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:11:06.383821    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:11:06.383965    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:11:06.384005    4013 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:11:06.384009    4013 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:11:06.384091    4013 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:11:06.384243    4013 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.ha-968000-m04 san=[127.0.0.1 192.169.0.8 ha-968000-m04 localhost minikube]
	I0805 16:11:06.441247    4013 provision.go:177] copyRemoteCerts
	I0805 16:11:06.441333    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:11:06.441360    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.441582    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:06.441714    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.441797    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:06.441875    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/id_rsa Username:docker}
	I0805 16:11:06.478976    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:11:06.479045    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:11:06.498620    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:11:06.498698    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0805 16:11:06.519415    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:11:06.519486    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:11:06.539397    4013 provision.go:87] duration metric: took 156.493754ms to configureAuth
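
configureAuth above issues a server certificate whose SANs cover every name the node may be reached by (127.0.0.1, 192.169.0.8, ha-968000-m04, localhost, minikube). A minimal sketch of building such a SAN list with crypto/x509; minikube signs with its own CA per the log, whereas this example self-signs to stay short:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-968000-m04"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The SAN set logged above: hostnames plus IPs.
			DNSNames:    []string{"ha-968000-m04", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.8")},
		}
		// Self-signed here (template doubles as parent); minikube uses its CA.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}
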
	I0805 16:11:06.539413    4013 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:11:06.539605    4013 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:11:06.539618    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:06.539752    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.539832    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:06.539911    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.540002    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.540090    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:06.540207    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:11:06.540372    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:11:06.540380    4013 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:11:06.599043    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:11:06.599055    4013 buildroot.go:70] root file system type: tmpfs
	I0805 16:11:06.599124    4013 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:11:06.599137    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.599263    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:06.599347    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.599450    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.599542    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:06.599675    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:11:06.599808    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:11:06.599855    4013 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:11:06.668751    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	Environment=NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:11:06.668771    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:06.668901    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:06.669001    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.669105    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:06.669186    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:06.669346    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:11:06.669490    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:11:06.669502    4013 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:11:08.250301    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:11:08.250316    4013 machine.go:97] duration metric: took 37.074755145s to provisionDockerMachine
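
The `diff -u old new || { mv new old; ... restart; }` command above installs the regenerated unit file only when its content actually changed, so an unchanged Docker daemon is not restarted. A sketch of the same compare-then-replace idiom in Go; the file paths are illustrative:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// replaceIfChanged mirrors the `diff -u old new || mv new old` step above:
	// install the candidate only when it differs from the current file.
	func replaceIfChanged(current, candidate string) (bool, error) {
		oldBytes, errOld := os.ReadFile(current)
		if errOld != nil && !os.IsNotExist(errOld) {
			return false, errOld
		}
		newBytes, err := os.ReadFile(candidate)
		if err != nil {
			return false, err
		}
		if errOld == nil && bytes.Equal(oldBytes, newBytes) {
			return false, nil // identical: nothing to do, no restart needed
		}
		return true, os.Rename(candidate, current)
	}

	func main() {
		changed, err := replaceIfChanged("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new")
		if err != nil {
			panic(err)
		}
		// A real runner would follow up with daemon-reload, enable, restart.
		fmt.Println("replaced:", changed)
	}
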
	I0805 16:11:08.250324    4013 start.go:293] postStartSetup for "ha-968000-m04" (driver="hyperkit")
	I0805 16:11:08.250332    4013 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:11:08.250344    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:08.250520    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:11:08.250533    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:08.250626    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:08.250720    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:08.250813    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:08.250900    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/id_rsa Username:docker}
	I0805 16:11:08.286575    4013 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:11:08.289665    4013 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:11:08.289683    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:11:08.289795    4013 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:11:08.289976    4013 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:11:08.289983    4013 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:11:08.290190    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:11:08.297566    4013 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:11:08.317678    4013 start.go:296] duration metric: took 67.345639ms for postStartSetup
	I0805 16:11:08.317700    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:08.317862    4013 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0805 16:11:08.317884    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:08.317967    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:08.318053    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:08.318144    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:08.318232    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/id_rsa Username:docker}
	I0805 16:11:08.353636    4013 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0805 16:11:08.353694    4013 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0805 16:11:08.385358    4013 fix.go:56] duration metric: took 37.314050272s for fixHost
	I0805 16:11:08.385384    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:08.385514    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:08.385605    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:08.385692    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:08.385761    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:08.385881    4013 main.go:141] libmachine: Using SSH client type: native
	I0805 16:11:08.386024    4013 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x71200c0] 0x7122e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0805 16:11:08.386032    4013 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:11:08.446465    4013 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722899468.587788631
	
	I0805 16:11:08.446479    4013 fix.go:216] guest clock: 1722899468.587788631
	I0805 16:11:08.446484    4013 fix.go:229] Guest: 2024-08-05 16:11:08.587788631 -0700 PDT Remote: 2024-08-05 16:11:08.385373 -0700 PDT m=+152.742754663 (delta=202.415631ms)
	I0805 16:11:08.446495    4013 fix.go:200] guest clock delta is within tolerance: 202.415631ms
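The clock check above runs date +%s.%N on the guest, parses the output as fractional seconds, and compares it with the host time recorded when the command returned; here the ~202ms delta is accepted. A rough Go sketch of that comparison, using the timestamps from the log (the tolerance constant is an assumption for illustration, not minikube's actual value):

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // guestClockDelta parses `date +%s.%N` output into a time.Time and
    // returns its offset from the host time captured alongside it.
    func guestClockDelta(dateOutput string, hostNow time.Time) (time.Duration, error) {
    	secs, err := strconv.ParseFloat(dateOutput, 64)
    	if err != nil {
    		return 0, err
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	return guest.Sub(hostNow), nil
    }

    func main() {
    	host := time.Unix(1722899468, 385373000) // host-side timestamp from the log
    	delta, err := guestClockDelta("1722899468.587788631", host)
    	if err != nil {
    		panic(err)
    	}
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = time.Second // assumed tolerance, illustration only
    	fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta < tolerance)
    }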
	I0805 16:11:08.446499    4013 start.go:83] releasing machines lock for "ha-968000-m04", held for 37.375207026s
	I0805 16:11:08.446517    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:08.446647    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetIP
	I0805 16:11:08.469183    4013 out.go:177] * Found network options:
	I0805 16:11:08.489020    4013 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6,192.169.0.7
	W0805 16:11:08.509956    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 16:11:08.509981    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 16:11:08.509995    4013 proxy.go:119] fail to check proxy env: Error ip not in block
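The three warnings correspond to the three NO_PROXY addresses listed just above: each address is checked against the CIDR blocks in the proxy environment, and "ip not in block" means it matched none of them. A hedged Go sketch of that kind of membership check (not minikube's actual proxy.go code):

    package main

    import (
    	"fmt"
    	"net"
    	"strings"
    )

    // ipInNoProxy reports whether ip matches any entry in a comma-separated
    // NO_PROXY list; entries may be bare IPs or CIDR blocks like 192.169.0.0/24.
    func ipInNoProxy(ip net.IP, noProxy string) bool {
    	for _, entry := range strings.Split(noProxy, ",") {
    		entry = strings.TrimSpace(entry)
    		if strings.Contains(entry, "/") {
    			if _, block, err := net.ParseCIDR(entry); err == nil && block.Contains(ip) {
    				return true
    			}
    		} else if net.ParseIP(entry).Equal(ip) {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	ip := net.ParseIP("192.169.0.8")
    	// false: the list holds bare IPs for other nodes, no block contains .8
    	fmt.Println(ipInNoProxy(ip, "192.169.0.5,192.169.0.6,192.169.0.7"))
    }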
	I0805 16:11:08.510012    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:08.510694    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:08.510902    4013 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:11:08.510988    4013 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:11:08.511021    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	W0805 16:11:08.511083    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 16:11:08.511098    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	W0805 16:11:08.511109    4013 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:11:08.511171    4013 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:11:08.511183    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:11:08.511199    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:08.511320    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:11:08.511356    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:08.511475    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:08.511503    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:11:08.511579    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/id_rsa Username:docker}
	I0805 16:11:08.511613    4013 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:11:08.511730    4013 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/id_rsa Username:docker}
	W0805 16:11:08.544454    4013 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:11:08.544519    4013 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:11:08.559248    4013 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:11:08.559269    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:11:08.559342    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:11:08.597200    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:11:08.605403    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:11:08.613387    4013 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:11:08.613447    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:11:08.621571    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:11:08.629943    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:11:08.638060    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:11:08.646402    4013 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:11:08.654807    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:11:08.662991    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:11:08.671582    4013 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
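The run of sed commands above rewrites containerd's config.toml in place: pause image, OOM score policy, runtime v2 shims, CNI conf_dir, unprivileged ports, and — the key step for the chosen cgroup driver — SystemdCgroup = false. A small Go sketch mirroring that one substitution (a sketch of the sed edit, not minikube's own code):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // setSystemdCgroup rewrites any `SystemdCgroup = ...` line in a
    // containerd config.toml string, preserving indentation; false selects
    // the cgroupfs driver, matching the sed edit in the log.
    func setSystemdCgroup(config string, enabled bool) string {
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	return re.ReplaceAllString(config, fmt.Sprintf("${1}SystemdCgroup = %v", enabled))
    }

    func main() {
    	in := `      SystemdCgroup = true`
    	fmt.Println(setSystemdCgroup(in, false))
    }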
	I0805 16:11:08.680942    4013 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:11:08.688339    4013 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:11:08.695737    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:11:08.798441    4013 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:11:08.816137    4013 start.go:495] detecting cgroup driver to use...
	I0805 16:11:08.816215    4013 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:11:08.835716    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:11:08.847518    4013 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:11:08.867990    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:11:08.879695    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:11:08.890752    4013 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:11:08.914456    4013 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:11:08.925541    4013 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:11:08.941237    4013 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:11:08.944245    4013 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:11:08.952235    4013 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:11:08.965768    4013 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:11:09.067675    4013 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:11:09.170165    4013 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:11:09.170197    4013 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
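The 130-byte daemon.json pushed here is what switches Docker itself to the cgroupfs driver. Its exact contents are not shown in the log; a plausible reconstruction, generated with a short Go sketch — only the exec-opts entry is implied by the "cgroupfs" message, the other fields are assumptions:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Assumed shape of the generated /etc/docker/daemon.json.
    	cfg := map[string]interface{}{
    		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
    		"log-driver":     "json-file",
    		"log-opts":       map[string]string{"max-size": "100m"},
    		"storage-driver": "overlay2",
    	}
    	out, _ := json.MarshalIndent(cfg, "", "  ")
    	fmt.Println(string(out))
    }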
	I0805 16:11:09.184139    4013 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:11:09.281548    4013 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:12:10.328097    4013 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.046493334s)
	I0805 16:12:10.328204    4013 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0805 16:12:10.365222    4013 out.go:177] 
	W0805 16:12:10.386312    4013 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 05 23:11:06 ha-968000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:11:06 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:06.389189042Z" level=info msg="Starting up"
	Aug 05 23:11:06 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:06.389663926Z" level=info msg="containerd not running, starting managed containerd"
	Aug 05 23:11:06 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:06.390143336Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=518
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.408369770Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423348772Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423404929Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423454269Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423464665Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423632943Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423651369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423774064Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423808885Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423821728Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423829007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.423935968Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.424118672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.425786619Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.425825910Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.425936027Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.425969728Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.426078806Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.426121396Z" level=info msg="metadata content store policy set" policy=shared
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427587891Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427669563Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427705862Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427719084Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427779644Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.427908991Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428136864Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428235911Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428270099Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428282071Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428290976Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428299125Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428313845Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428325716Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428339937Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428355366Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428366031Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428374178Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428386784Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428406973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428418331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428429739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428438142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428446212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428453990Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428461755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428469955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428479423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428486756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428506619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428545500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428559198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428573033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428581795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428589599Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428635221Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428670612Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428680617Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428689626Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428696156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428800505Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.428839684Z" level=info msg="NRI interface is disabled by configuration."
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.429026394Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.429145595Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.429201340Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 05 23:11:06 ha-968000-m04 dockerd[518]: time="2024-08-05T23:11:06.429234250Z" level=info msg="containerd successfully booted in 0.021734s"
	Aug 05 23:11:07 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:07.407781552Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 05 23:11:07 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:07.418738721Z" level=info msg="Loading containers: start."
	Aug 05 23:11:07 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:07.516865232Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 05 23:11:07 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:07.582390999Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 05 23:11:08 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:08.356499605Z" level=info msg="Loading containers: done."
	Aug 05 23:11:08 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:08.366049745Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 05 23:11:08 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:08.366234171Z" level=info msg="Daemon has completed initialization"
	Aug 05 23:11:08 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:08.390065153Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 05 23:11:08 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:08.390220880Z" level=info msg="API listen on [::]:2376"
	Aug 05 23:11:08 ha-968000-m04 systemd[1]: Started Docker Application Container Engine.
	Aug 05 23:11:09 ha-968000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Aug 05 23:11:09 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:09.434256146Z" level=info msg="Processing signal 'terminated'"
	Aug 05 23:11:09 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:09.435568971Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 05 23:11:09 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:09.435927759Z" level=info msg="Daemon shutdown complete"
	Aug 05 23:11:09 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:09.436029566Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 05 23:11:09 ha-968000-m04 dockerd[512]: time="2024-08-05T23:11:09.436215589Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 05 23:11:10 ha-968000-m04 systemd[1]: docker.service: Deactivated successfully.
	Aug 05 23:11:10 ha-968000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Aug 05 23:11:10 ha-968000-m04 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:11:10 ha-968000-m04 dockerd[1111]: time="2024-08-05T23:11:10.480077702Z" level=info msg="Starting up"
	Aug 05 23:12:10 ha-968000-m04 dockerd[1111]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 05 23:12:10 ha-968000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 05 23:12:10 ha-968000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 05 23:12:10 ha-968000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0805 16:12:10.386388    4013 out.go:239] * 
	W0805 16:12:10.387046    4013 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:12:10.449396    4013 out.go:177] 
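The journal above shows the actual failure: the restarted dockerd (pid 1111) waited on its managed containerd at /run/containerd/containerd.sock and gave up when the dial deadline expired (23:11:10 to 23:12:10), which matches the 1m1.05s `systemctl restart docker` duration recorded earlier. A quick way to probe for that condition from Go (the check itself is illustrative; the socket path is the one from the log):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // probeSocket dials a unix socket the way a client waiting on containerd
    // would, reporting whether anything is listening before the deadline.
    func probeSocket(path string, timeout time.Duration) error {
    	conn, err := net.DialTimeout("unix", path, timeout)
    	if err != nil {
    		return err
    	}
    	return conn.Close()
    }

    func main() {
    	if err := probeSocket("/run/containerd/containerd.sock", 5*time.Second); err != nil {
    		fmt.Println("containerd socket not reachable:", err)
    	}
    }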
	
	
	==> Docker <==
	Aug 05 23:09:56 ha-968000 dockerd[1146]: time="2024-08-05T23:09:56.374377051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:09:57 ha-968000 dockerd[1146]: time="2024-08-05T23:09:57.374383643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:09:57 ha-968000 dockerd[1146]: time="2024-08-05T23:09:57.374505237Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:09:57 ha-968000 dockerd[1146]: time="2024-08-05T23:09:57.374519049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:09:57 ha-968000 dockerd[1146]: time="2024-08-05T23:09:57.374719774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:09:59 ha-968000 dockerd[1146]: time="2024-08-05T23:09:59.344050167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:09:59 ha-968000 dockerd[1146]: time="2024-08-05T23:09:59.344118579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:09:59 ha-968000 dockerd[1146]: time="2024-08-05T23:09:59.344128623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:09:59 ha-968000 dockerd[1146]: time="2024-08-05T23:09:59.344477096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:10:00 ha-968000 dockerd[1146]: time="2024-08-05T23:10:00.366625069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:10:00 ha-968000 dockerd[1146]: time="2024-08-05T23:10:00.366693392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:10:00 ha-968000 dockerd[1146]: time="2024-08-05T23:10:00.366706812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:10:00 ha-968000 dockerd[1146]: time="2024-08-05T23:10:00.366787584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:10:22 ha-968000 dockerd[1146]: time="2024-08-05T23:10:22.371842451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:10:22 ha-968000 dockerd[1146]: time="2024-08-05T23:10:22.371961703Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:10:22 ha-968000 dockerd[1146]: time="2024-08-05T23:10:22.371975627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:10:22 ha-968000 dockerd[1146]: time="2024-08-05T23:10:22.372138790Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:10:26 ha-968000 dockerd[1140]: time="2024-08-05T23:10:26.510842611Z" level=info msg="ignoring event" container=cfccdb420519d323e32884587cbb2325493555960556f383b6b5243f23bf5672 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 05 23:10:26 ha-968000 dockerd[1146]: time="2024-08-05T23:10:26.511299602Z" level=info msg="shim disconnected" id=cfccdb420519d323e32884587cbb2325493555960556f383b6b5243f23bf5672 namespace=moby
	Aug 05 23:10:26 ha-968000 dockerd[1146]: time="2024-08-05T23:10:26.511337640Z" level=warning msg="cleaning up after shim disconnected" id=cfccdb420519d323e32884587cbb2325493555960556f383b6b5243f23bf5672 namespace=moby
	Aug 05 23:10:26 ha-968000 dockerd[1146]: time="2024-08-05T23:10:26.511345722Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 05 23:11:48 ha-968000 dockerd[1146]: time="2024-08-05T23:11:48.356819227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:11:48 ha-968000 dockerd[1146]: time="2024-08-05T23:11:48.357279209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:11:48 ha-968000 dockerd[1146]: time="2024-08-05T23:11:48.357395319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:11:48 ha-968000 dockerd[1146]: time="2024-08-05T23:11:48.357615482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	355fa38aecae1       6e38f40d628db                                                                                         36 seconds ago      Running             storage-provisioner       2                   1dbcc850389f8       storage-provisioner
	577258077df9f       cbb01a7bd410d                                                                                         2 minutes ago       Running             coredns                   1                   391b901a0529c       coredns-7db6d8ff4d-mfzln
	63f8a4c2092da       cbb01a7bd410d                                                                                         2 minutes ago       Running             coredns                   1                   c850e00017450       coredns-7db6d8ff4d-hjp5z
	0193799bafd1a       917d7814b9b5b                                                                                         2 minutes ago       Running             kindnet-cni               1                   9dba72250058d       kindnet-qh6l6
	d72783d2d1ffb       8c811b4aec35f                                                                                         2 minutes ago       Running             busybox                   1                   32be004c80f6e       busybox-fc5497c4f-pxn97
	3a4ca38aa00af       55bb025d2cfa5                                                                                         2 minutes ago       Running             kube-proxy                1                   588ec8f41833a       kube-proxy-v87jb
	cfccdb420519d       6e38f40d628db                                                                                         2 minutes ago       Exited              storage-provisioner       1                   1dbcc850389f8       storage-provisioner
	5279a75fe7753       3861cfcd7c04c                                                                                         3 minutes ago       Running             etcd                      1                   5b34813274f1c       etcd-ha-968000
	513af177e332b       38af8ddebf499                                                                                         3 minutes ago       Running             kube-vip                  0                   ee4d5a2e10c9e       kube-vip-ha-968000
	b60d19a548167       1f6d574d502f3                                                                                         3 minutes ago       Running             kube-apiserver            1                   cf530a36471fd       kube-apiserver-ha-968000
	24b87a0c98dcc       76932a3b37d7e                                                                                         3 minutes ago       Running             kube-controller-manager   1                   9bb601d425aab       kube-controller-manager-ha-968000
	d830712616b7f       3edc18e7b7672                                                                                         3 minutes ago       Running             kube-scheduler            1                   8f8294dee2372       kube-scheduler-ha-968000
	cb7475c28d1f7       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   6 minutes ago       Exited              busybox                   0                   9732a9146dd0b       busybox-fc5497c4f-pxn97
	718ace635ea06       cbb01a7bd410d                                                                                         8 minutes ago       Exited              coredns                   0                   500832bd7de13       coredns-7db6d8ff4d-hjp5z
	08f1d5be6bd28       cbb01a7bd410d                                                                                         8 minutes ago       Exited              coredns                   0                   9fe7dedb16964       coredns-7db6d8ff4d-mfzln
	0eff729c401d3       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              8 minutes ago       Exited              kindnet-cni               0                   0675dd00ddb4e       kindnet-qh6l6
	236ffa329c7b4       55bb025d2cfa5                                                                                         9 minutes ago       Exited              kube-proxy                0                   20695b590fecf       kube-proxy-v87jb
	7aac4c03a731c       1f6d574d502f3                                                                                         9 minutes ago       Exited              kube-apiserver            0                   2cfee92cb7572       kube-apiserver-ha-968000
	66678698a7a8c       3edc18e7b7672                                                                                         9 minutes ago       Exited              kube-scheduler            0                   e8d1b1861c6fd       kube-scheduler-ha-968000
	17f0dc9ba8def       3861cfcd7c04c                                                                                         9 minutes ago       Exited              etcd                      0                   77ae5c7a9a48a       etcd-ha-968000
	794441de3f195       76932a3b37d7e                                                                                         9 minutes ago       Exited              kube-controller-manager   0                   bd03fad51648f       kube-controller-manager-ha-968000
	
	
	==> coredns [08f1d5be6bd2] <==
	[INFO] 10.244.2.2:43657 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000409083s
	[INFO] 10.244.1.2:55696 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00015406s
	[INFO] 10.244.1.2:41053 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.009559855s
	[INFO] 10.244.1.2:39691 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006956s
	[INFO] 10.244.1.2:59893 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008504s
	[INFO] 10.244.0.4:33214 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000082987s
	[INFO] 10.244.0.4:53796 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000097087s
	[INFO] 10.244.0.4:47821 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082377s
	[INFO] 10.244.0.4:55897 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000029356s
	[INFO] 10.244.2.2:49761 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000081825s
	[INFO] 10.244.2.2:58164 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106492s
	[INFO] 10.244.1.2:55164 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000087227s
	[INFO] 10.244.1.2:47300 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000047931s
	[INFO] 10.244.0.4:37289 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080578s
	[INFO] 10.244.2.2:42229 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100216s
	[INFO] 10.244.2.2:56584 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000066152s
	[INFO] 10.244.2.2:33160 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000064701s
	[INFO] 10.244.2.2:52725 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010518s
	[INFO] 10.244.0.4:36176 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000096237s
	[INFO] 10.244.0.4:33211 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000082639s
	[INFO] 10.244.2.2:38034 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097661s
	[INFO] 10.244.2.2:57513 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108796s
	[INFO] 10.244.2.2:33013 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000036818s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [577258077df9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:43827 - 29901 "HINFO IN 4580923541750251985.7631092243009977165. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011091367s
	
	
	==> coredns [63f8a4c2092d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53228 - 19573 "HINFO IN 3833116979176979481.4354200100168845612. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011686072s
	
	
	==> coredns [718ace635ea0] <==
	[INFO] 10.244.1.2:52400 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000099778s
	[INFO] 10.244.0.4:35456 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000059225s
	[INFO] 10.244.0.4:34314 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000107945s
	[INFO] 10.244.0.4:54779 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000106466s
	[INFO] 10.244.0.4:58919 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.00067383s
	[INFO] 10.244.2.2:54419 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000090016s
	[INFO] 10.244.2.2:54439 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000073949s
	[INFO] 10.244.2.2:46501 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000041344s
	[INFO] 10.244.2.2:41755 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000069101s
	[INFO] 10.244.2.2:51313 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000132647s
	[INFO] 10.244.2.2:37540 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000073728s
	[INFO] 10.244.1.2:59563 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000125503s
	[INFO] 10.244.1.2:47682 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000070898s
	[INFO] 10.244.0.4:41592 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088839s
	[INFO] 10.244.0.4:54512 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000059642s
	[INFO] 10.244.0.4:57130 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000080875s
	[INFO] 10.244.1.2:51262 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000104244s
	[INFO] 10.244.1.2:34748 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125796s
	[INFO] 10.244.1.2:40451 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000119057s
	[INFO] 10.244.1.2:37514 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000090659s
	[INFO] 10.244.0.4:41185 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009175s
	[INFO] 10.244.0.4:34639 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000100906s
	[INFO] 10.244.2.2:55855 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000088544s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-968000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-968000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-968000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T16_03_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:03:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-968000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:12:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:09:32 +0000   Mon, 05 Aug 2024 23:03:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:09:32 +0000   Mon, 05 Aug 2024 23:03:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:09:32 +0000   Mon, 05 Aug 2024 23:03:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:09:32 +0000   Mon, 05 Aug 2024 23:03:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-968000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 3d395b8f5a5645a29e265d49d4358791
	  System UUID:                a9f34e4f-0000-0000-b87b-350754bafb6d
	  Boot ID:                    d8c06632-4a4d-43d2-a7c9-eaf87fc4ce97
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pxn97              0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 coredns-7db6d8ff4d-hjp5z             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m
	  kube-system                 coredns-7db6d8ff4d-mfzln             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m1s
	  kube-system                 etcd-ha-968000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m14s
	  kube-system                 kindnet-qh6l6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m1s
	  kube-system                 kube-apiserver-ha-968000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-controller-manager-ha-968000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-proxy-v87jb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m1s
	  kube-system                 kube-scheduler-ha-968000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-vip-ha-968000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m42s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m                     kube-proxy       
	  Normal  Starting                 2m28s                  kube-proxy       
	  Normal  Starting                 9m14s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m14s                  kubelet          Node ha-968000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m14s                  kubelet          Node ha-968000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m14s                  kubelet          Node ha-968000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m2s                   node-controller  Node ha-968000 event: Registered Node ha-968000 in Controller
	  Normal  NodeReady                8m42s                  kubelet          Node ha-968000 status is now: NodeReady
	  Normal  RegisteredNode           7m43s                  node-controller  Node ha-968000 event: Registered Node ha-968000 in Controller
	  Normal  RegisteredNode           6m26s                  node-controller  Node ha-968000 event: Registered Node ha-968000 in Controller
	  Normal  RegisteredNode           4m21s                  node-controller  Node ha-968000 event: Registered Node ha-968000 in Controller
	  Normal  NodeHasSufficientMemory  3m30s (x8 over 3m30s)  kubelet          Node ha-968000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    3m30s (x8 over 3m30s)  kubelet          Node ha-968000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m30s (x7 over 3m30s)  kubelet          Node ha-968000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m52s                  node-controller  Node ha-968000 event: Registered Node ha-968000 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-968000 event: Registered Node ha-968000 in Controller
	  Normal  RegisteredNode           117s                   node-controller  Node ha-968000 event: Registered Node ha-968000 in Controller
	
	
	Name:               ha-968000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-968000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-968000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T16_04_26_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:04:24 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-968000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:12:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:09:27 +0000   Mon, 05 Aug 2024 23:04:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:09:27 +0000   Mon, 05 Aug 2024 23:04:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:09:27 +0000   Mon, 05 Aug 2024 23:04:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:09:27 +0000   Mon, 05 Aug 2024 23:09:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-968000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d4b057c6b4e48f692755b6cf841ad9c
	  System UUID:                fe2b4f71-0000-0000-b597-390ca402ab71
	  Boot ID:                    7c73bd0f-a9d0-4153-aeb2-c06b5b51ba84
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-k62jp                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 etcd-ha-968000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m58s
	  kube-system                 kindnet-fp5ns                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m
	  kube-system                 kube-apiserver-ha-968000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 kube-controller-manager-ha-968000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 kube-proxy-fvd5q                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 kube-scheduler-ha-968000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m59s
	  kube-system                 kube-vip-ha-968000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m55s                  kube-proxy       
	  Normal   Starting                 4m34s                  kube-proxy       
	  Normal   Starting                 7m56s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  8m                     kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m (x8 over 8m)        kubelet          Node ha-968000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m (x8 over 8m)        kubelet          Node ha-968000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m (x7 over 8m)        kubelet          Node ha-968000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m57s                  node-controller  Node ha-968000-m02 event: Registered Node ha-968000-m02 in Controller
	  Normal   RegisteredNode           7m43s                  node-controller  Node ha-968000-m02 event: Registered Node ha-968000-m02 in Controller
	  Normal   RegisteredNode           6m26s                  node-controller  Node ha-968000-m02 event: Registered Node ha-968000-m02 in Controller
	  Normal   Starting                 4m38s                  kubelet          Starting kubelet.
	  Warning  Rebooted                 4m38s                  kubelet          Node ha-968000-m02 has been rebooted, boot id: 7b95b6e8-f951-4164-8d86-82386ad49202
	  Normal   NodeAllocatableEnforced  4m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4m38s (x2 over 4m38s)  kubelet          Node ha-968000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m38s (x2 over 4m38s)  kubelet          Node ha-968000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m38s (x2 over 4m38s)  kubelet          Node ha-968000-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m21s                  node-controller  Node ha-968000-m02 event: Registered Node ha-968000-m02 in Controller
	  Normal   Starting                 3m11s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  3m11s (x8 over 3m11s)  kubelet          Node ha-968000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m11s (x8 over 3m11s)  kubelet          Node ha-968000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m11s (x7 over 3m11s)  kubelet          Node ha-968000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  3m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           2m52s                  node-controller  Node ha-968000-m02 event: Registered Node ha-968000-m02 in Controller
	  Normal   RegisteredNode           2m49s                  node-controller  Node ha-968000-m02 event: Registered Node ha-968000-m02 in Controller
	  Normal   RegisteredNode           117s                   node-controller  Node ha-968000-m02 event: Registered Node ha-968000-m02 in Controller
	
	
	Name:               ha-968000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-968000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=ha-968000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T16_06_39_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:06:38 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-968000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:07:59 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 05 Aug 2024 23:07:09 +0000   Mon, 05 Aug 2024 23:10:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 05 Aug 2024 23:07:09 +0000   Mon, 05 Aug 2024 23:10:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 05 Aug 2024 23:07:09 +0000   Mon, 05 Aug 2024 23:10:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 05 Aug 2024 23:07:09 +0000   Mon, 05 Aug 2024 23:10:13 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-968000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 387692c0920442adbb5b3caabbb94471
	  System UUID:                a18c4311-0000-0000-88be-5c31f452a5bc
	  Boot ID:                    7f2467e2-e07f-4b0a-8fd3-3fe64bcdd2ab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-5dshm       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m46s
	  kube-system                 kube-proxy-qptt6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m38s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    5m46s (x2 over 5m46s)  kubelet          Node ha-968000-m04 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           5m46s                  node-controller  Node ha-968000-m04 event: Registered Node ha-968000-m04 in Controller
	  Normal  NodeAllocatableEnforced  5m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     5m46s (x2 over 5m46s)  kubelet          Node ha-968000-m04 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m46s (x2 over 5m46s)  kubelet          Node ha-968000-m04 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           5m43s                  node-controller  Node ha-968000-m04 event: Registered Node ha-968000-m04 in Controller
	  Normal  RegisteredNode           5m42s                  node-controller  Node ha-968000-m04 event: Registered Node ha-968000-m04 in Controller
	  Normal  NodeReady                5m23s                  kubelet          Node ha-968000-m04 status is now: NodeReady
	  Normal  RegisteredNode           4m21s                  node-controller  Node ha-968000-m04 event: Registered Node ha-968000-m04 in Controller
	  Normal  RegisteredNode           2m52s                  node-controller  Node ha-968000-m04 event: Registered Node ha-968000-m04 in Controller
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-968000-m04 event: Registered Node ha-968000-m04 in Controller
	  Normal  NodeNotReady             2m11s                  node-controller  Node ha-968000-m04 status is now: NodeNotReady
	  Normal  RegisteredNode           117s                   node-controller  Node ha-968000-m04 event: Registered Node ha-968000-m04 in Controller
	
	
	==> dmesg <==
	[  +0.035875] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.008042] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.687990] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007066] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.637880] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +1.424408] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.527207] systemd-fstab-generator[472]: Ignoring "noauto" option for root device
	[  +0.101400] systemd-fstab-generator[484]: Ignoring "noauto" option for root device
	[  +2.009836] systemd-fstab-generator[1065]: Ignoring "noauto" option for root device
	[  +0.058762] kauditd_printk_skb: 79 callbacks suppressed
	[  +0.193065] systemd-fstab-generator[1105]: Ignoring "noauto" option for root device
	[  +0.103523] systemd-fstab-generator[1117]: Ignoring "noauto" option for root device
	[  +0.111824] systemd-fstab-generator[1131]: Ignoring "noauto" option for root device
	[  +2.468462] systemd-fstab-generator[1353]: Ignoring "noauto" option for root device
	[  +0.106476] systemd-fstab-generator[1365]: Ignoring "noauto" option for root device
	[  +0.116705] systemd-fstab-generator[1377]: Ignoring "noauto" option for root device
	[  +0.119469] systemd-fstab-generator[1392]: Ignoring "noauto" option for root device
	[  +0.468214] systemd-fstab-generator[1552]: Ignoring "noauto" option for root device
	[Aug 5 23:09] kauditd_printk_skb: 234 callbacks suppressed
	[ +41.984025] kauditd_printk_skb: 40 callbacks suppressed
	[ +13.597500] kauditd_printk_skb: 20 callbacks suppressed
	[Aug 5 23:10] kauditd_printk_skb: 45 callbacks suppressed
	
	
	==> etcd [17f0dc9ba8de] <==
	2024/08/05 23:08:27 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-05T23:08:27.902192Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.809078875s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-08-05T23:08:27.902223Z","caller":"traceutil/trace.go:171","msg":"trace[1922081133] range","detail":"{range_begin:/registry/persistentvolumeclaims/; range_end:/registry/persistentvolumeclaims0; }","duration":"1.809122007s","start":"2024-08-05T23:08:26.093097Z","end":"2024-08-05T23:08:27.902219Z","steps":["trace[1922081133] 'agreement among raft nodes before linearized reading'  (duration: 1.80908929s)"],"step_count":1}
	{"level":"warn","ts":"2024-08-05T23:08:27.902238Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-08-05T23:08:26.093092Z","time spent":"1.809141726s","remote":"127.0.0.1:33368","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":0,"response size":0,"request content":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true "}
	2024/08/05 23:08:27 WARNING: [core] [Server #6] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-08-05T23:08:27.938966Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:08:27.939045Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-05T23:08:27.939085Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-08-05T23:08:27.939508Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"c5d16a8b28740de6"}
	{"level":"info","ts":"2024-08-05T23:08:27.939521Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c5d16a8b28740de6"}
	{"level":"info","ts":"2024-08-05T23:08:27.939534Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c5d16a8b28740de6"}
	{"level":"info","ts":"2024-08-05T23:08:27.939586Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c5d16a8b28740de6"}
	{"level":"info","ts":"2024-08-05T23:08:27.939611Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c5d16a8b28740de6"}
	{"level":"info","ts":"2024-08-05T23:08:27.939649Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"c5d16a8b28740de6"}
	{"level":"info","ts":"2024-08-05T23:08:27.93966Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"c5d16a8b28740de6"}
	{"level":"info","ts":"2024-08-05T23:08:27.939664Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:08:27.939669Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:08:27.939682Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:08:27.939945Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:08:27.93999Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:08:27.940013Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:08:27.94002Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:08:27.943994Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-05T23:08:27.944194Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-08-05T23:08:27.944204Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-968000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> etcd [5279a75fe775] <==
	{"level":"info","ts":"2024-08-05T23:10:11.946695Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:10:11.956038Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"3cf0731ec44cd9cd","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-08-05T23:10:11.956182Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:10:12.000882Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"3cf0731ec44cd9cd","stream-type":"stream Message"}
	{"level":"info","ts":"2024-08-05T23:10:12.001279Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"warn","ts":"2024-08-05T23:12:18.861856Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.169.0.7:34074","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-08-05T23:12:18.868861Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(13314548521573537860 14254291441516023270)"}
	{"level":"info","ts":"2024-08-05T23:12:18.869566Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","removed-remote-peer-id":"3cf0731ec44cd9cd","removed-remote-peer-urls":["https://192.169.0.7:2380"]}
	{"level":"info","ts":"2024-08-05T23:12:18.869616Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"warn","ts":"2024-08-05T23:12:18.869852Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:12:18.869892Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"warn","ts":"2024-08-05T23:12:18.870083Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:12:18.870121Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"warn","ts":"2024-08-05T23:12:18.870139Z","caller":"etcdserver/server.go:980","msg":"rejected Raft message from removed member","local-member-id":"b8c6c7563d17d844","removed-member-id":"3cf0731ec44cd9cd"}
	{"level":"warn","ts":"2024-08-05T23:12:18.870154Z","caller":"rafthttp/peer.go:180","msg":"failed to process Raft message","error":"cannot process message from removed member"}
	{"level":"info","ts":"2024-08-05T23:12:18.870521Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"warn","ts":"2024-08-05T23:12:18.87083Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3cf0731ec44cd9cd","error":"context canceled"}
	{"level":"warn","ts":"2024-08-05T23:12:18.870868Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"3cf0731ec44cd9cd","error":"failed to read 3cf0731ec44cd9cd on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-08-05T23:12:18.870885Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"warn","ts":"2024-08-05T23:12:18.87124Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3cf0731ec44cd9cd","error":"http: read on closed response body"}
	{"level":"info","ts":"2024-08-05T23:12:18.871291Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:12:18.871303Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:12:18.871311Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"b8c6c7563d17d844","removed-remote-peer-id":"3cf0731ec44cd9cd"}
	{"level":"info","ts":"2024-08-05T23:12:18.871334Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"3cf0731ec44cd9cd"}
	{"level":"warn","ts":"2024-08-05T23:12:18.878208Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"b8c6c7563d17d844","remote-peer-id-stream-handler":"b8c6c7563d17d844","remote-peer-id-from":"3cf0731ec44cd9cd"}
	
	
	==> kernel <==
	 23:12:25 up 3 min,  0 users,  load average: 0.12, 0.12, 0.05
	Linux ha-968000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0193799bafd1] <==
	I0805 23:11:50.543662       1 main.go:322] Node ha-968000-m02 has CIDR [10.244.1.0/24] 
	I0805 23:11:50.543753       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0805 23:11:50.543809       1 main.go:322] Node ha-968000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:12:00.536321       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0805 23:12:00.536406       1 main.go:299] handling current node
	I0805 23:12:00.536431       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0805 23:12:00.536445       1 main.go:322] Node ha-968000-m02 has CIDR [10.244.1.0/24] 
	I0805 23:12:00.536565       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0805 23:12:00.536623       1 main.go:322] Node ha-968000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:12:00.536679       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0805 23:12:00.536753       1 main.go:322] Node ha-968000-m04 has CIDR [10.244.3.0/24] 
	I0805 23:12:10.543704       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0805 23:12:10.543779       1 main.go:299] handling current node
	I0805 23:12:10.543801       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0805 23:12:10.543846       1 main.go:322] Node ha-968000-m02 has CIDR [10.244.1.0/24] 
	I0805 23:12:10.543961       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0805 23:12:10.544007       1 main.go:322] Node ha-968000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:12:10.544128       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0805 23:12:10.544175       1 main.go:322] Node ha-968000-m04 has CIDR [10.244.3.0/24] 
	I0805 23:12:20.536072       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0805 23:12:20.536113       1 main.go:299] handling current node
	I0805 23:12:20.536126       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0805 23:12:20.536131       1 main.go:322] Node ha-968000-m02 has CIDR [10.244.1.0/24] 
	I0805 23:12:20.536243       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0805 23:12:20.536270       1 main.go:322] Node ha-968000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [0eff729c401d] <==
	I0805 23:07:48.343605       1 main.go:322] Node ha-968000-m04 has CIDR [10.244.3.0/24] 
	I0805 23:07:58.351602       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0805 23:07:58.351677       1 main.go:322] Node ha-968000-m04 has CIDR [10.244.3.0/24] 
	I0805 23:07:58.351762       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0805 23:07:58.351807       1 main.go:299] handling current node
	I0805 23:07:58.351827       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0805 23:07:58.351841       1 main.go:322] Node ha-968000-m02 has CIDR [10.244.1.0/24] 
	I0805 23:07:58.351891       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0805 23:07:58.351905       1 main.go:322] Node ha-968000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:08:08.348631       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0805 23:08:08.348818       1 main.go:299] handling current node
	I0805 23:08:08.348908       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0805 23:08:08.349012       1 main.go:322] Node ha-968000-m02 has CIDR [10.244.1.0/24] 
	I0805 23:08:08.349214       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0805 23:08:08.349308       1 main.go:322] Node ha-968000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:08:08.349413       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0805 23:08:08.349484       1 main.go:322] Node ha-968000-m04 has CIDR [10.244.3.0/24] 
	I0805 23:08:18.343861       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0805 23:08:18.343942       1 main.go:322] Node ha-968000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:08:18.344043       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0805 23:08:18.344162       1 main.go:322] Node ha-968000-m04 has CIDR [10.244.3.0/24] 
	I0805 23:08:18.344272       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0805 23:08:18.344339       1 main.go:299] handling current node
	I0805 23:08:18.344378       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0805 23:08:18.344489       1 main.go:322] Node ha-968000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [7aac4c03a731] <==
	W0805 23:08:28.934152       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934242       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934382       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934673       1 logging.go:59] [core] [Channel #46 SubChannel #47] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934721       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934770       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934856       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934936       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934993       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.935038       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934249       1 logging.go:59] [core] [Channel #7 SubChannel #8] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.935502       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.935594       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.935678       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.935758       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934264       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.934278       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.935813       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.935856       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.935972       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.936008       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.936080       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.939206       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.939367       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:08:28.942086       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [b60d19a54816] <==
	I0805 23:09:21.358366       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0805 23:09:21.350043       1 controller.go:116] Starting legacy_token_tracking_controller
	I0805 23:09:21.370875       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0805 23:09:21.450095       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0805 23:09:21.450212       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 23:09:21.450382       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0805 23:09:21.450418       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0805 23:09:21.451646       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0805 23:09:21.455635       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0805 23:09:21.456070       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0805 23:09:21.456339       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0805 23:09:21.456586       1 aggregator.go:165] initial CRD sync complete...
	I0805 23:09:21.456619       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 23:09:21.456625       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 23:09:21.456632       1 cache.go:39] Caches are synced for autoregister controller
	I0805 23:09:21.469346       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0805 23:09:21.469695       1 policy_source.go:224] refreshing policies
	I0805 23:09:21.471034       1 shared_informer.go:320] Caches are synced for configmaps
	W0805 23:09:21.483470       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.6]
	I0805 23:09:21.485903       1 controller.go:615] quota admission added evaluator for: endpoints
	I0805 23:09:21.498200       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0805 23:09:21.502190       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0805 23:09:21.548021       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 23:09:22.355586       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0805 23:09:22.617731       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.5]
	
	
	==> kube-controller-manager [24b87a0c98dc] <==
	I0805 23:10:00.593284       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="111.463µs"
	I0805 23:10:08.322324       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.921µs"
	I0805 23:10:11.182398       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.283834ms"
	I0805 23:10:11.184399       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="1.874184ms"
	I0805 23:10:13.827399       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.540772ms"
	I0805 23:10:13.827613       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="111.193µs"
	I0805 23:10:22.774890       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="48.13µs"
	I0805 23:10:22.794629       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-m7pj6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-m7pj6\": the object has been modified; please apply your changes to the latest version and try again"
	I0805 23:10:22.796877       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"e530d70f-5afe-4156-b878-dad9e9636f3d", APIVersion:"v1", ResourceVersion:"251", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-m7pj6 EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-m7pj6": the object has been modified; please apply your changes to the latest version and try again
	I0805 23:10:22.814531       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.799975ms"
	I0805 23:10:22.815039       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="388.332µs"
	I0805 23:12:15.683833       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.626177ms"
	I0805 23:12:15.709136       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.278142ms"
	I0805 23:12:15.717411       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.213787ms"
	I0805 23:12:15.717491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.897µs"
	I0805 23:12:15.762901       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.844163ms"
	I0805 23:12:15.771375       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.222037ms"
	I0805 23:12:15.772032       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="50.333µs"
	I0805 23:12:15.794314       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.209262ms"
	I0805 23:12:15.794391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.285µs"
	I0805 23:12:17.812503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="61.993µs"
	I0805 23:12:18.633306       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.324µs"
	I0805 23:12:18.639954       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="35.299µs"
	I0805 23:12:18.642916       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="37.294µs"
	E0805 23:12:20.461885       1 garbagecollector.go:399] error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:"storage.k8s.io/v1", Kind:"CSINode", Name:"ha-968000-m03", UID:"b00b1fea-b44f-471e-82fe-ad73b3147c94", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:""}, dependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{state:0, sema:0x0}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Node", Name:"ha-968000-m03", UID:"250e469a-0959-40c3-9732-05667c63c72d", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io "ha-968000-m03" not found
	
	
	==> kube-controller-manager [794441de3f19] <==
	I0805 23:06:10.953009       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="140.842137ms"
	I0805 23:06:10.970316       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.256226ms"
	I0805 23:06:10.981859       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.43482ms"
	I0805 23:06:10.982161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.844µs"
	I0805 23:06:10.982675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.081µs"
	I0805 23:06:10.983001       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.395µs"
	I0805 23:06:11.011942       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.408789ms"
	I0805 23:06:11.012096       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.128µs"
	I0805 23:06:11.149412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.376µs"
	I0805 23:06:13.025618       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.346296ms"
	I0805 23:06:13.025933       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.389µs"
	I0805 23:06:13.649894       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.694881ms"
	I0805 23:06:13.650164       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.959µs"
	I0805 23:06:14.621712       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.914516ms"
	I0805 23:06:14.621776       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="26.079µs"
	E0805 23:06:38.510923       1 certificate_controller.go:146] Sync csr-tp8z2 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-tp8z2": the object has been modified; please apply your changes to the latest version and try again
	E0805 23:06:38.515110       1 certificate_controller.go:146] Sync csr-tp8z2 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-tp8z2": the object has been modified; please apply your changes to the latest version and try again
	I0805 23:06:38.609561       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-968000-m04\" does not exist"
	I0805 23:06:38.630882       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-968000-m04" podCIDRs=["10.244.3.0/24"]
	I0805 23:06:42.818552       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-968000-m04"
	I0805 23:07:01.374214       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-968000-m04"
	I0805 23:07:47.490230       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.437456ms"
	I0805 23:07:47.490444       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="39.713µs"
	I0805 23:07:50.255694       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.276166ms"
	I0805 23:07:50.255998       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="199.7µs"
	
	
	==> kube-proxy [236ffa329c7b] <==
	I0805 23:03:24.411171       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:03:24.417641       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0805 23:03:24.460670       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:03:24.460733       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:03:24.460747       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:03:24.463438       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:03:24.463665       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:03:24.463697       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:03:24.464664       1 config.go:192] "Starting service config controller"
	I0805 23:03:24.464691       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:03:24.464706       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:03:24.464709       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:03:24.464932       1 config.go:319] "Starting node config controller"
	I0805 23:03:24.464937       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:03:24.564862       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 23:03:24.564952       1 shared_informer.go:320] Caches are synced for node config
	I0805 23:03:24.564967       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-proxy [3a4ca38aa00a] <==
	I0805 23:09:56.622719       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:09:56.639713       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0805 23:09:56.685724       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:09:56.685766       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:09:56.685780       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:09:56.688954       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:09:56.689176       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:09:56.689205       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:09:56.691248       1 config.go:192] "Starting service config controller"
	I0805 23:09:56.691903       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:09:56.691947       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:09:56.691951       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:09:56.693386       1 config.go:319] "Starting node config controller"
	I0805 23:09:56.693414       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:09:56.792187       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0805 23:09:56.792473       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:09:56.793440       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [66678698a7a8] <==
	E0805 23:03:07.355865       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0805 23:03:07.355526       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:03:07.356013       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:03:07.355515       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0805 23:03:07.356047       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0805 23:03:08.169414       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 23:03:08.169482       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 23:03:08.181890       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:03:08.181944       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 23:03:08.454577       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0805 23:03:08.454685       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0805 23:03:08.749738       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0805 23:06:10.780677       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-rmn5x\": pod busybox-fc5497c4f-rmn5x is already assigned to node \"ha-968000-m03\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-rmn5x" node="ha-968000-m03"
	E0805 23:06:10.780742       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod da945b47-6ef2-4df0-8bf2-9ae079ae2d84(default/busybox-fc5497c4f-rmn5x) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-rmn5x"
	E0805 23:06:10.780758       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-rmn5x\": pod busybox-fc5497c4f-rmn5x is already assigned to node \"ha-968000-m03\"" pod="default/busybox-fc5497c4f-rmn5x"
	I0805 23:06:10.780855       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-rmn5x" node="ha-968000-m03"
	E0805 23:06:38.649780       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-qptt6\": pod kube-proxy-qptt6 is already assigned to node \"ha-968000-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-qptt6" node="ha-968000-m04"
	E0805 23:06:38.649837       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod a826a636-1d05-4cca-a56d-d25a9cf41506(kube-system/kube-proxy-qptt6) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-qptt6"
	E0805 23:06:38.649849       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-qptt6\": pod kube-proxy-qptt6 is already assigned to node \"ha-968000-m04\"" pod="kube-system/kube-proxy-qptt6"
	I0805 23:06:38.649861       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-qptt6" node="ha-968000-m04"
	E0805 23:06:38.662121       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5dshm\": pod kindnet-5dshm is already assigned to node \"ha-968000-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-5dshm" node="ha-968000-m04"
	E0805 23:06:38.662175       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2641d2a9-a26a-4cbe-b8ea-99ed7c7af43c(kube-system/kindnet-5dshm) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-5dshm"
	E0805 23:06:38.662188       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5dshm\": pod kindnet-5dshm is already assigned to node \"ha-968000-m04\"" pod="kube-system/kindnet-5dshm"
	I0805 23:06:38.662201       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5dshm" node="ha-968000-m04"
	E0805 23:08:27.797554       1 run.go:74] "command failed" err="finished without leader elect"
	
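The repeated "Plugin Failed ... already assigned" sequences above are the scheduler losing an optimistic-concurrency race: its Bind call is rejected because the pod object already carries a nodeName, the cache ForgetPod call is a no-op since the pod was never assumed, and the pod is deliberately not re-queued ("Abort adding it back to queue"). A quick confirmation that the binding stuck, sketched with the context and pod name from this run:

	# expected output: ha-968000-m04
	kubectl --context ha-968000 get pod -n kube-system kube-proxy-qptt6 -o jsonpath='{.spec.nodeName}'

The closing "finished without leader elect" error is consistent with this scheduler instance being torn down while the control plane stopped for the restart.
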
	
	==> kube-scheduler [d830712616b7] <==
	I0805 23:09:02.830792       1 serving.go:380] Generated self-signed cert in-memory
	W0805 23:09:13.096106       1 authentication.go:368] Error looking up in-cluster authentication configuration: Get "https://192.169.0.5:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": net/http: TLS handshake timeout
	W0805 23:09:13.096131       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0805 23:09:13.096136       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0805 23:09:21.391714       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0805 23:09:21.391874       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:09:21.403353       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0805 23:09:21.403400       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 23:09:21.403774       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0805 23:09:21.403911       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 23:09:21.503960       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
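This replacement scheduler came up before the restarted apiserver could terminate TLS, hence the handshake timeout at 23:09:13; per the log it continues without an authentication configuration and recovers once secure serving starts at 23:09:21. The same endpoint can be probed directly, assuming the address from the log:

	# -k skips certificate verification, matching the cluster's self-signed certs
	curl -k https://192.169.0.5:8443/healthz
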
	
	==> kubelet <==
	Aug 05 23:10:22 ha-968000 kubelet[1559]: I0805 23:10:22.306938    1559 scope.go:117] "RemoveContainer" containerID="08f1d5be6bd28b75f94b7738ff81a5faf3ea26cc077f91e542745e41a27fb9b1"
	Aug 05 23:10:26 ha-968000 kubelet[1559]: I0805 23:10:26.805536    1559 scope.go:117] "RemoveContainer" containerID="9b4a6fce5b3c1066d545503e22783e35c718132d1b3257df8921cf2bf1f2bc01"
	Aug 05 23:10:26 ha-968000 kubelet[1559]: I0805 23:10:26.805819    1559 scope.go:117] "RemoveContainer" containerID="cfccdb420519d323e32884587cbb2325493555960556f383b6b5243f23bf5672"
	Aug 05 23:10:26 ha-968000 kubelet[1559]: E0805 23:10:26.805954    1559 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(52e2952a-756d-4f65-84f5-588cb6563297)\"" pod="kube-system/storage-provisioner" podUID="52e2952a-756d-4f65-84f5-588cb6563297"
	Aug 05 23:10:41 ha-968000 kubelet[1559]: I0805 23:10:41.306378    1559 scope.go:117] "RemoveContainer" containerID="cfccdb420519d323e32884587cbb2325493555960556f383b6b5243f23bf5672"
	Aug 05 23:10:41 ha-968000 kubelet[1559]: E0805 23:10:41.306800    1559 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(52e2952a-756d-4f65-84f5-588cb6563297)\"" pod="kube-system/storage-provisioner" podUID="52e2952a-756d-4f65-84f5-588cb6563297"
	Aug 05 23:10:54 ha-968000 kubelet[1559]: E0805 23:10:54.327441    1559 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:10:54 ha-968000 kubelet[1559]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:10:54 ha-968000 kubelet[1559]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:10:54 ha-968000 kubelet[1559]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:10:54 ha-968000 kubelet[1559]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:10:56 ha-968000 kubelet[1559]: I0805 23:10:56.307136    1559 scope.go:117] "RemoveContainer" containerID="cfccdb420519d323e32884587cbb2325493555960556f383b6b5243f23bf5672"
	Aug 05 23:10:56 ha-968000 kubelet[1559]: E0805 23:10:56.307284    1559 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(52e2952a-756d-4f65-84f5-588cb6563297)\"" pod="kube-system/storage-provisioner" podUID="52e2952a-756d-4f65-84f5-588cb6563297"
	Aug 05 23:11:08 ha-968000 kubelet[1559]: I0805 23:11:08.307092    1559 scope.go:117] "RemoveContainer" containerID="cfccdb420519d323e32884587cbb2325493555960556f383b6b5243f23bf5672"
	Aug 05 23:11:08 ha-968000 kubelet[1559]: E0805 23:11:08.307242    1559 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(52e2952a-756d-4f65-84f5-588cb6563297)\"" pod="kube-system/storage-provisioner" podUID="52e2952a-756d-4f65-84f5-588cb6563297"
	Aug 05 23:11:20 ha-968000 kubelet[1559]: I0805 23:11:20.305826    1559 scope.go:117] "RemoveContainer" containerID="cfccdb420519d323e32884587cbb2325493555960556f383b6b5243f23bf5672"
	Aug 05 23:11:20 ha-968000 kubelet[1559]: E0805 23:11:20.305965    1559 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(52e2952a-756d-4f65-84f5-588cb6563297)\"" pod="kube-system/storage-provisioner" podUID="52e2952a-756d-4f65-84f5-588cb6563297"
	Aug 05 23:11:34 ha-968000 kubelet[1559]: I0805 23:11:34.306692    1559 scope.go:117] "RemoveContainer" containerID="cfccdb420519d323e32884587cbb2325493555960556f383b6b5243f23bf5672"
	Aug 05 23:11:34 ha-968000 kubelet[1559]: E0805 23:11:34.309100    1559 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(52e2952a-756d-4f65-84f5-588cb6563297)\"" pod="kube-system/storage-provisioner" podUID="52e2952a-756d-4f65-84f5-588cb6563297"
	Aug 05 23:11:48 ha-968000 kubelet[1559]: I0805 23:11:48.306459    1559 scope.go:117] "RemoveContainer" containerID="cfccdb420519d323e32884587cbb2325493555960556f383b6b5243f23bf5672"
	Aug 05 23:11:54 ha-968000 kubelet[1559]: E0805 23:11:54.321829    1559 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:11:54 ha-968000 kubelet[1559]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:11:54 ha-968000 kubelet[1559]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:11:54 ha-968000 kubelet[1559]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:11:54 ha-968000 kubelet[1559]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
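Two failures repeat through the kubelet log in the dump above: storage-provisioner cycling in a 1m20s CrashLoopBackOff, and the iptables canary failing because the guest kernel lacks the ip6 nat table. Both can be checked directly; the commands below are standard kubectl/minikube usage with the names from this run:

	# logs from the last crashed storage-provisioner container
	kubectl --context ha-968000 -n kube-system logs storage-provisioner --previous
	# reproduces the canary failure if ip6table_nat is genuinely absent from the kernel
	minikube ssh -p ha-968000 -- sudo ip6tables -t nat -L
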
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-968000 -n ha-968000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-968000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-bkvjz
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-968000 describe pod busybox-fc5497c4f-bkvjz
helpers_test.go:282: (dbg) kubectl --context ha-968000 describe pod busybox-fc5497c4f-bkvjz:

-- stdout --
	Name:             busybox-fc5497c4f-bkvjz
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v6kql (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-v6kql:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  9s (x2 over 11s)   default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  10s (x2 over 12s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  10s (x2 over 12s)  default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

-- /stdout --
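The describe output pins down why busybox-fc5497c4f-bkvjz is Pending: of the four nodes, one carries the unreachable taint, one is cordoned, and the remaining two already run busybox replicas that the pod's anti-affinity rules exclude. A one-liner to list schedulability and taints per node, assuming the same context (the jsonpath template is stock kubectl):

	kubectl --context ha-968000 get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.unschedulable}{"\t"}{.spec.taints}{"\n"}{end}'
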
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (12.16s)

TestMultiControlPlane/serial/RestartCluster (76.33s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-968000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
E0805 16:13:13.600067    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-968000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : exit status 90 (1m16.169598008s)

-- stdout --
	* [ha-968000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-968000" primary control-plane node in "ha-968000" cluster
	* Restarting existing hyperkit VM for "ha-968000" ...
	
	

-- /stdout --
** stderr ** 
	I0805 16:12:52.343987    4197 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:12:52.344163    4197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:12:52.344168    4197 out.go:304] Setting ErrFile to fd 2...
	I0805 16:12:52.344172    4197 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:12:52.344360    4197 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:12:52.345796    4197 out.go:298] Setting JSON to false
	I0805 16:12:52.367980    4197 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2543,"bootTime":1722897029,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:12:52.368073    4197 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:12:52.390333    4197 out.go:177] * [ha-968000] minikube v1.33.1 on Darwin 14.5
	I0805 16:12:52.432928    4197 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:12:52.432972    4197 notify.go:220] Checking for updates...
	I0805 16:12:52.475537    4197 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:12:52.517842    4197 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:12:52.538703    4197 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:12:52.559738    4197 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:12:52.580841    4197 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:12:52.602503    4197 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:12:52.603173    4197 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:12:52.603258    4197 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:12:52.612805    4197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52127
	I0805 16:12:52.613212    4197 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:12:52.613605    4197 main.go:141] libmachine: Using API Version  1
	I0805 16:12:52.613614    4197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:12:52.613880    4197 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:12:52.614003    4197 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:12:52.614202    4197 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:12:52.614431    4197 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:12:52.614456    4197 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:12:52.622760    4197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52129
	I0805 16:12:52.623129    4197 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:12:52.623483    4197 main.go:141] libmachine: Using API Version  1
	I0805 16:12:52.623507    4197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:12:52.623719    4197 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:12:52.623852    4197 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:12:52.652787    4197 out.go:177] * Using the hyperkit driver based on existing profile
	I0805 16:12:52.694661    4197 start.go:297] selected driver: hyperkit
	I0805 16:12:52.694687    4197 start.go:901] validating driver "hyperkit" against &{Name:ha-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:12:52.694918    4197 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:12:52.695117    4197 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:12:52.695318    4197 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 16:12:52.704780    4197 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 16:12:52.708642    4197 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:12:52.708667    4197 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 16:12:52.711292    4197 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:12:52.711329    4197 cni.go:84] Creating CNI manager for ""
	I0805 16:12:52.711340    4197 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0805 16:12:52.711413    4197 start.go:340] cluster config:
	{Name:ha-968000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-968000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:12:52.711547    4197 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:12:52.753797    4197 out.go:177] * Starting "ha-968000" primary control-plane node in "ha-968000" cluster
	I0805 16:12:52.774697    4197 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:12:52.774770    4197 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 16:12:52.774789    4197 cache.go:56] Caching tarball of preloaded images
	I0805 16:12:52.775005    4197 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:12:52.775024    4197 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:12:52.775202    4197 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:12:52.776192    4197 start.go:360] acquireMachinesLock for ha-968000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:12:52.776343    4197 start.go:364] duration metric: took 125.495µs to acquireMachinesLock for "ha-968000"
	I0805 16:12:52.776387    4197 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:12:52.776402    4197 fix.go:54] fixHost starting: 
	I0805 16:12:52.776815    4197 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:12:52.776842    4197 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:12:52.786019    4197 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52131
	I0805 16:12:52.786374    4197 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:12:52.786753    4197 main.go:141] libmachine: Using API Version  1
	I0805 16:12:52.786776    4197 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:12:52.787029    4197 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:12:52.787153    4197 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:12:52.787258    4197 main.go:141] libmachine: (ha-968000) Calling .GetState
	I0805 16:12:52.787348    4197 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:12:52.787447    4197 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid from json: 4025
	I0805 16:12:52.788357    4197 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid 4025 missing from process table
	I0805 16:12:52.788384    4197 fix.go:112] recreateIfNeeded on ha-968000: state=Stopped err=<nil>
	I0805 16:12:52.788399    4197 main.go:141] libmachine: (ha-968000) Calling .DriverName
	W0805 16:12:52.788493    4197 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:12:52.830843    4197 out.go:177] * Restarting existing hyperkit VM for "ha-968000" ...
	I0805 16:12:52.851720    4197 main.go:141] libmachine: (ha-968000) Calling .Start
	I0805 16:12:52.851999    4197 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:12:52.852045    4197 main.go:141] libmachine: (ha-968000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/hyperkit.pid
	I0805 16:12:52.853853    4197 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid 4025 missing from process table
	I0805 16:12:52.853869    4197 main.go:141] libmachine: (ha-968000) DBG | pid 4025 is in state "Stopped"
	I0805 16:12:52.853885    4197 main.go:141] libmachine: (ha-968000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/hyperkit.pid...
	I0805 16:12:52.854066    4197 main.go:141] libmachine: (ha-968000) DBG | Using UUID a9f347e2-e9fc-4e4f-b87b-350754bafb6d
	I0805 16:12:52.976038    4197 main.go:141] libmachine: (ha-968000) DBG | Generated MAC 3e:79:a8:cb:37:4b
	I0805 16:12:52.976061    4197 main.go:141] libmachine: (ha-968000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000
	I0805 16:12:52.976182    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:52 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a9f347e2-e9fc-4e4f-b87b-350754bafb6d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b8960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:12:52.976208    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:52 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a9f347e2-e9fc-4e4f-b87b-350754bafb6d", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003b8960)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:12:52.976254    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:52 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a9f347e2-e9fc-4e4f-b87b-350754bafb6d", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/ha-968000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"}
	I0805 16:12:52.976295    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:52 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a9f347e2-e9fc-4e4f-b87b-350754bafb6d -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/ha-968000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-968000"
	I0805 16:12:52.976308    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:52 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:12:52.977859    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:52 DEBUG: hyperkit: Pid is 4210
	I0805 16:12:52.978340    4197 main.go:141] libmachine: (ha-968000) DBG | Attempt 0
	I0805 16:12:52.978355    4197 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:12:52.978439    4197 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid from json: 4210
	I0805 16:12:52.980099    4197 main.go:141] libmachine: (ha-968000) DBG | Searching for 3e:79:a8:cb:37:4b in /var/db/dhcpd_leases ...
	I0805 16:12:52.980165    4197 main.go:141] libmachine: (ha-968000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0805 16:12:52.980191    4197 main.go:141] libmachine: (ha-968000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:12:52.980218    4197 main.go:141] libmachine: (ha-968000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:12:52.980234    4197 main.go:141] libmachine: (ha-968000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:12:52.980261    4197 main.go:141] libmachine: (ha-968000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2acfd}
	I0805 16:12:52.980275    4197 main.go:141] libmachine: (ha-968000) DBG | Found match: 3e:79:a8:cb:37:4b
	I0805 16:12:52.980279    4197 main.go:141] libmachine: (ha-968000) Calling .GetConfigRaw
	I0805 16:12:52.980315    4197 main.go:141] libmachine: (ha-968000) DBG | IP: 192.169.0.5
	I0805 16:12:52.981004    4197 main.go:141] libmachine: (ha-968000) Calling .GetIP
	I0805 16:12:52.981239    4197 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/ha-968000/config.json ...
	I0805 16:12:52.981692    4197 machine.go:94] provisionDockerMachine start ...
	I0805 16:12:52.981702    4197 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:12:52.981813    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:12:52.981988    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:12:52.982114    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:12:52.982246    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:12:52.982338    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:12:52.982466    4197 main.go:141] libmachine: Using SSH client type: native
	I0805 16:12:52.982737    4197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb51e0c0] 0xb520e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:12:52.982745    4197 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:12:52.986045    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:52 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:12:53.044291    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:53 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:12:53.045021    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:12:53.045040    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:12:53.045049    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:12:53.045057    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:12:53.427203    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:12:53.427219    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:12:53.542516    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:12:53.542534    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:12:53.542545    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:12:53.542556    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:53 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:12:53.543394    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:53 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:12:53.543405    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:53 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:12:59.112197    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:59 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:12:59.112280    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:59 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:12:59.112292    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:59 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:12:59.136631    4197 main.go:141] libmachine: (ha-968000) DBG | 2024/08/05 16:12:59 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:13:04.053687    4197 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 16:13:04.053702    4197 main.go:141] libmachine: (ha-968000) Calling .GetMachineName
	I0805 16:13:04.053881    4197 buildroot.go:166] provisioning hostname "ha-968000"
	I0805 16:13:04.053891    4197 main.go:141] libmachine: (ha-968000) Calling .GetMachineName
	I0805 16:13:04.053979    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:13:04.054078    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:13:04.054178    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:13:04.054288    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:13:04.054375    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:13:04.054516    4197 main.go:141] libmachine: Using SSH client type: native
	I0805 16:13:04.054674    4197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb51e0c0] 0xb520e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:13:04.054682    4197 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-968000 && echo "ha-968000" | sudo tee /etc/hostname
	I0805 16:13:04.124343    4197 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-968000
	
	I0805 16:13:04.124367    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:13:04.124495    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:13:04.124595    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:13:04.124681    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:13:04.124777    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:13:04.124902    4197 main.go:141] libmachine: Using SSH client type: native
	I0805 16:13:04.125037    4197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb51e0c0] 0xb520e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:13:04.125048    4197 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-968000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-968000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-968000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:13:04.192331    4197 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:13:04.192352    4197 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:13:04.192365    4197 buildroot.go:174] setting up certificates
	I0805 16:13:04.192374    4197 provision.go:84] configureAuth start
	I0805 16:13:04.192381    4197 main.go:141] libmachine: (ha-968000) Calling .GetMachineName
	I0805 16:13:04.192518    4197 main.go:141] libmachine: (ha-968000) Calling .GetIP
	I0805 16:13:04.192627    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:13:04.192704    4197 provision.go:143] copyHostCerts
	I0805 16:13:04.192732    4197 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:13:04.192798    4197 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:13:04.192807    4197 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:13:04.192948    4197 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:13:04.193156    4197 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:13:04.193198    4197 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:13:04.193203    4197 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:13:04.193295    4197 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:13:04.193448    4197 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:13:04.193488    4197 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:13:04.193493    4197 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:13:04.193572    4197 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:13:04.193733    4197 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.ha-968000 san=[127.0.0.1 192.169.0.5 ha-968000 localhost minikube]
	I0805 16:13:04.331133    4197 provision.go:177] copyRemoteCerts
	I0805 16:13:04.331186    4197 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:13:04.331203    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:13:04.331338    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:13:04.331429    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:13:04.331514    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:13:04.331609    4197 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:13:04.367054    4197 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:13:04.367134    4197 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:13:04.392319    4197 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:13:04.392402    4197 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0805 16:13:04.411646    4197 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:13:04.411711    4197 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:13:04.430362    4197 provision.go:87] duration metric: took 237.96771ms to configureAuth
	I0805 16:13:04.430380    4197 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:13:04.430544    4197 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:13:04.430577    4197 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:13:04.430711    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:13:04.430804    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:13:04.430878    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:13:04.430958    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:13:04.431039    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:13:04.431137    4197 main.go:141] libmachine: Using SSH client type: native
	I0805 16:13:04.431261    4197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb51e0c0] 0xb520e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:13:04.431268    4197 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:13:04.492084    4197 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:13:04.492096    4197 buildroot.go:70] root file system type: tmpfs
	I0805 16:13:04.492183    4197 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:13:04.492197    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:13:04.492345    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:13:04.492441    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:13:04.492553    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:13:04.492650    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:13:04.492797    4197 main.go:141] libmachine: Using SSH client type: native
	I0805 16:13:04.492985    4197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb51e0c0] 0xb520e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:13:04.493029    4197 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:13:04.562180    4197 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
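The unit file above relies on a systemd convention: an empty ExecStart= line clears any command inherited from a base unit or drop-in before the real one is set, which is why the comment block warns about the "more than one ExecStart=" load error. Two standard commands to sanity-check the result on the guest (nothing minikube-specific):

        systemctl cat docker.service           # show the unit plus all drop-ins, in load order
        systemd-analyze verify docker.service  # report load errors such as a duplicate ExecStart=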
	I0805 16:13:04.562202    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:13:04.562357    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:13:04.562465    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:13:04.562569    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:13:04.562671    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:13:04.562810    4197 main.go:141] libmachine: Using SSH client type: native
	I0805 16:13:04.562949    4197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb51e0c0] 0xb520e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:13:04.562965    4197 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:13:06.261144    4197 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
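The command above is an install-if-changed idiom: diff -u exits non-zero when the files differ, and also when the old file is missing (the "can't stat" case seen here), so the || { ... } branch installs the new unit and restarts docker only when something actually changed. A standalone sketch of the same pattern (paths as in the log):

        new=/lib/systemd/system/docker.service.new
        cur=/lib/systemd/system/docker.service
        sudo diff -u "$cur" "$new" || {
            sudo mv "$new" "$cur"              # replace on change, or on first install
            sudo systemctl daemon-reload
            sudo systemctl enable docker
            sudo systemctl restart docker
        }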
	I0805 16:13:06.261158    4197 machine.go:97] duration metric: took 13.279453855s to provisionDockerMachine
	I0805 16:13:06.261173    4197 start.go:293] postStartSetup for "ha-968000" (driver="hyperkit")
	I0805 16:13:06.261181    4197 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:13:06.261191    4197 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:13:06.261380    4197 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:13:06.261400    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:13:06.261493    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:13:06.261571    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:13:06.261669    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:13:06.261756    4197 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:13:06.297624    4197 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:13:06.300660    4197 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:13:06.300677    4197 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:13:06.300776    4197 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:13:06.300966    4197 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:13:06.300972    4197 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:13:06.301180    4197 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:13:06.308876    4197 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:13:06.328051    4197 start.go:296] duration metric: took 66.869153ms for postStartSetup
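The postStartSetup sync copies a single local asset, a host-side test certificate, into the guest's trust directory. A quick way to confirm the copied PEM is a well-formed certificate (standard openssl usage; the file name comes from the log above):

        openssl x509 -in /etc/ssl/certs/16782.pem -noout -subject -enddate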
	I0805 16:13:06.328073    4197 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:13:06.328240    4197 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0805 16:13:06.328253    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:13:06.328343    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:13:06.328435    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:13:06.328515    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:13:06.328593    4197 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:13:06.364733    4197 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0805 16:13:06.364806    4197 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
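The restore step replays /var/lib/minikube/backup/etc onto / with --archive (preserve permissions, ownership, timestamps) and --update (skip destination files that are already newer, so freshly provisioned files are not clobbered by a stale backup). To preview such a restore without writing anything, rsync's usual dry-run flags apply:

        sudo rsync --archive --update --dry-run --itemize-changes /var/lib/minikube/backup/etc /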
	I0805 16:13:06.398172    4197 fix.go:56] duration metric: took 13.621767744s for fixHost
	I0805 16:13:06.398198    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:13:06.398335    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:13:06.398423    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:13:06.398504    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:13:06.398599    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:13:06.398716    4197 main.go:141] libmachine: Using SSH client type: native
	I0805 16:13:06.398873    4197 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb51e0c0] 0xb520e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0805 16:13:06.398880    4197 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:13:06.461800    4197 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722899586.610355122
	
	I0805 16:13:06.461815    4197 fix.go:216] guest clock: 1722899586.610355122
	I0805 16:13:06.461821    4197 fix.go:229] Guest: 2024-08-05 16:13:06.610355122 -0700 PDT Remote: 2024-08-05 16:13:06.398188 -0700 PDT m=+14.088755481 (delta=212.167122ms)
	I0805 16:13:06.461844    4197 fix.go:200] guest clock delta is within tolerance: 212.167122ms
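The clock check runs date +%s.%N in the guest and compares it with the host wall clock at the moment the SSH command returned; the ~212ms delta is inside minikube's drift tolerance, so no resync is attempted. A rough way to reproduce the measurement by hand (a sketch, assuming GNU date and bc on the host; the SSH target mirrors the log):

        t0=$(date +%s.%N)
        guest=$(ssh docker@192.169.0.5 date +%s.%N)
        t1=$(date +%s.%N)
        # the guest sample was taken somewhere in [t0, t1]; compare to the midpoint
        echo "skew ~ $(echo "$guest - ($t0 + $t1) / 2" | bc -l) s"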
	I0805 16:13:06.461849    4197 start.go:83] releasing machines lock for "ha-968000", held for 13.685490427s
	I0805 16:13:06.461872    4197 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:13:06.461995    4197 main.go:141] libmachine: (ha-968000) Calling .GetIP
	I0805 16:13:06.462092    4197 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:13:06.462439    4197 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:13:06.462528    4197 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:13:06.462604    4197 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:13:06.462633    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:13:06.462645    4197 ssh_runner.go:195] Run: cat /version.json
	I0805 16:13:06.462656    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:13:06.462728    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:13:06.462739    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:13:06.462831    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:13:06.462853    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:13:06.462917    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:13:06.462935    4197 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:13:06.462997    4197 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:13:06.463037    4197 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:13:06.514827    4197 ssh_runner.go:195] Run: systemctl --version
	I0805 16:13:06.519005    4197 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0805 16:13:06.523375    4197 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:13:06.523427    4197 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:13:06.571927    4197 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
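The find invocation (its shell quoting is eaten by the logger) renames every bridge or podman CNI config under /etc/cni/net.d to *.mk_disabled, so only the CNI that minikube installs later stays active; -printf "%p, " is what produced the "disabled [...]" line above. Written out with normal escaping, the equivalent is:

        sudo find /etc/cni/net.d -maxdepth 1 -type f \
            \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
            -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;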
	I0805 16:13:06.571943    4197 start.go:495] detecting cgroup driver to use...
	I0805 16:13:06.572040    4197 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:13:06.587562    4197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:13:06.596370    4197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:13:06.605060    4197 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:13:06.605118    4197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:13:06.613652    4197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:13:06.622473    4197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:13:06.631181    4197 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:13:06.639758    4197 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:13:06.648618    4197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:13:06.657416    4197 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:13:06.666421    4197 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
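Taken together, the sed edits above leave /etc/containerd/config.toml using cgroupfs rather than systemd cgroup management, the registry.k8s.io pause image, the runc v2 shim, and unprivileged ports enabled. The touched keys would end up roughly as follows (a sketch; surrounding keys and full table nesting abbreviated):

        [plugins."io.containerd.grpc.v1.cri"]
          enable_unprivileged_ports = true
          restrict_oom_score_adj = false
          sandbox_image = "registry.k8s.io/pause:3.9"
          [plugins."io.containerd.grpc.v1.cri".cni]
            conf_dir = "/etc/cni/net.d"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = false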
	I0805 16:13:06.675220    4197 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:13:06.683172    4197 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
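Writing 1 into /proc/sys/net/ipv4/ip_forward takes effect immediately but does not persist across reboots; the equivalent sysctl form, plus a verification read, is:

        sudo sysctl -w net.ipv4.ip_forward=1
        sysctl net.ipv4.ip_forward   # expect: net.ipv4.ip_forward = 1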
	I0805 16:13:06.691181    4197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:13:06.786155    4197 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:13:06.804324    4197 start.go:495] detecting cgroup driver to use...
	I0805 16:13:06.804407    4197 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:13:06.827470    4197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:13:06.839312    4197 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:13:06.859000    4197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:13:06.870680    4197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:13:06.882067    4197 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:13:06.905361    4197 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:13:06.916666    4197 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:13:06.931635    4197 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:13:06.934591    4197 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:13:06.942576    4197 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
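The 189-byte drop-in scp'd to /etc/systemd/system/cri-docker.service.d/10-cni.conf is not echoed in the log, so its exact contents are unknown here; by its path it overrides cri-docker.service, and a drop-in of that shape would look like this (contents illustrative, not the verbatim file):

        [Service]
        ExecStart=
        ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni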
	I0805 16:13:06.955801    4197 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:13:07.061784    4197 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:13:07.155287    4197 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:13:07.155362    4197 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
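The 130-byte /etc/docker/daemon.json is likewise written from memory and not echoed; per the "configuring docker to use cgroupfs" line it must at least pin the cgroup driver. An illustrative daemon.json for that one setting (the real file may carry additional keys):

        {
          "exec-opts": ["native.cgroupdriver=cgroupfs"]
        }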
	I0805 16:13:07.169294    4197 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:13:07.265873    4197 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:14:08.297619    4197 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.031708492s)
	I0805 16:14:08.297682    4197 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0805 16:14:08.332983    4197 out.go:177] 
	W0805 16:14:08.353974    4197 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 05 23:13:05 ha-968000 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:13:05 ha-968000 dockerd[494]: time="2024-08-05T23:13:05.025796542Z" level=info msg="Starting up"
	Aug 05 23:13:05 ha-968000 dockerd[494]: time="2024-08-05T23:13:05.026275955Z" level=info msg="containerd not running, starting managed containerd"
	Aug 05 23:13:05 ha-968000 dockerd[494]: time="2024-08-05T23:13:05.026767606Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=500
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.043878450Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.060682015Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.060703963Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.060744191Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.060754740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.060883504Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.060946883Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.061056223Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.061091854Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.061104918Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.061112699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.061219480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.061377943Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.063465443Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.063502347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.063607087Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.063642939Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.063750262Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.063792319Z" level=info msg="metadata content store policy set" policy=shared
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066277173Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066320051Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066332061Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066341119Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066388227Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066430646Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066591660Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066661984Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066695507Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066706319Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066715023Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066722989Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066731088Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066743141Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066752209Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066760915Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066773966Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066783969Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066797297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066812164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066827922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066838660Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066847354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066855628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066863124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066870940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066878872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066887645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066894952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066902321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066914436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066926986Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066940793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066949271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.066956394Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.067007106Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.067068873Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.067079676Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.067087946Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.067094451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.067102248Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.067109074Z" level=info msg="NRI interface is disabled by configuration."
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.067277484Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.067366194Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.067442840Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 05 23:13:05 ha-968000 dockerd[500]: time="2024-08-05T23:13:05.067477127Z" level=info msg="containerd successfully booted in 0.024588s"
	Aug 05 23:13:06 ha-968000 dockerd[494]: time="2024-08-05T23:13:06.049983576Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 05 23:13:06 ha-968000 dockerd[494]: time="2024-08-05T23:13:06.092824198Z" level=info msg="Loading containers: start."
	Aug 05 23:13:06 ha-968000 dockerd[494]: time="2024-08-05T23:13:06.277054987Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 05 23:13:06 ha-968000 dockerd[494]: time="2024-08-05T23:13:06.337947662Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 05 23:13:06 ha-968000 dockerd[494]: time="2024-08-05T23:13:06.383062060Z" level=info msg="Loading containers: done."
	Aug 05 23:13:06 ha-968000 dockerd[494]: time="2024-08-05T23:13:06.389662214Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 05 23:13:06 ha-968000 dockerd[494]: time="2024-08-05T23:13:06.389882553Z" level=info msg="Daemon has completed initialization"
	Aug 05 23:13:06 ha-968000 dockerd[494]: time="2024-08-05T23:13:06.408604314Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 05 23:13:06 ha-968000 dockerd[494]: time="2024-08-05T23:13:06.408689171Z" level=info msg="API listen on [::]:2376"
	Aug 05 23:13:06 ha-968000 systemd[1]: Started Docker Application Container Engine.
	Aug 05 23:13:07 ha-968000 dockerd[494]: time="2024-08-05T23:13:07.427608164Z" level=info msg="Processing signal 'terminated'"
	Aug 05 23:13:07 ha-968000 dockerd[494]: time="2024-08-05T23:13:07.429645573Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 05 23:13:07 ha-968000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 05 23:13:07 ha-968000 dockerd[494]: time="2024-08-05T23:13:07.430210127Z" level=info msg="Daemon shutdown complete"
	Aug 05 23:13:07 ha-968000 dockerd[494]: time="2024-08-05T23:13:07.430279854Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 05 23:13:07 ha-968000 dockerd[494]: time="2024-08-05T23:13:07.430331160Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 05 23:13:08 ha-968000 systemd[1]: docker.service: Deactivated successfully.
	Aug 05 23:13:08 ha-968000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 05 23:13:08 ha-968000 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:13:08 ha-968000 dockerd[1141]: time="2024-08-05T23:13:08.463398783Z" level=info msg="Starting up"
	Aug 05 23:14:08 ha-968000 dockerd[1141]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 05 23:14:08 ha-968000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 05 23:14:08 ha-968000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 05 23:14:08 ha-968000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
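The dump above narrows the failure to a single line: the first dockerd (pid 494) came up fine with its managed containerd on /var/run/docker/containerd/containerd.sock, but after the reconfiguration the restarted dockerd (pid 1141) instead waits on /run/containerd/containerd.sock, where nothing is listening since containerd.service was stopped earlier, and gives up after the 60s dial timeout. One plausible reading is a daemon or unit option now pointing docker at the system containerd socket. Standard checks on the guest for this situation:

        systemctl status containerd docker        # should the system containerd be running at all?
        ls -l /run/containerd/containerd.sock     # does the expected socket even exist?
        journalctl -b -u containerd -u docker     # correlate both units across the boot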
	W0805 16:14:08.354111    4197 out.go:239] * 
	W0805 16:14:08.355420    4197 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:14:08.417794    4197 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-amd64 start -p ha-968000 --wait=true -v=7 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-968000 -n ha-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-968000 -n ha-968000: exit status 6 (149.551242ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 16:14:08.610491    4242 status.go:417] kubeconfig endpoint: get endpoint: "ha-968000" does not appear in /Users/jenkins/minikube-integration/19373-1122/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-968000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
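This post-mortem hits the stale-context case the warning describes: the VM still reports Running, but the "ha-968000" entry has vanished from the kubeconfig, so the endpoint lookup fails with exit status 6. When the cluster itself is healthy, the warning's own suggestion repairs the context (here the start never completed, so the failure stands):

        minikube update-context -p ha-968000   # rewrite the kubeconfig entry for this profile
        kubectl config current-context         # confirm kubectl points at ha-968000 again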
--- FAIL: TestMultiControlPlane/serial/RestartCluster (76.33s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:413: expected profile "ha-968000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-968000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-968000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-968000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
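The assertion compares one field buried in that single-line JSON blob; extracting it directly makes the mismatch obvious. With jq installed, for example:

        out/minikube-darwin-amd64 profile list --output json \
            | jq -r '.valid[] | "\(.Name): \(.Status)"'
        # prints "ha-968000: Stopped" -- the test expected "Degraded"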
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-968000 -n ha-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-968000 -n ha-968000: exit status 6 (146.733631ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0805 16:14:08.929046    4253 status.go:417] kubeconfig endpoint: get endpoint: "ha-968000" does not appear in /Users/jenkins/minikube-integration/19373-1122/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-968000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.32s)
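
The post-mortem above fails in a characteristic way: `status` exits 6 because the profile name is missing from the kubeconfig, even though the host itself reports Running. A minimal Go sketch of that endpoint check, assuming a plain-text kubeconfig and using a naive substring match as a stand-in for the real client-go lookup in status.go:

package main

import (
	"fmt"
	"os"
	"strings"
)

// profileInKubeconfig approximates the check behind the
// `kubeconfig endpoint: get endpoint` error above: does the profile
// name appear in the kubeconfig at all? (Hypothetical helper; the real
// code resolves the named cluster entry and its server URL.)
func profileInKubeconfig(path, profile string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, fmt.Errorf("reading kubeconfig: %w", err)
	}
	return strings.Contains(string(data), profile), nil
}

func main() {
	ok, err := profileInKubeconfig(os.Getenv("KUBECONFIG"), "ha-968000")
	if err != nil || !ok {
		fmt.Println(`"ha-968000" does not appear in kubeconfig; run "minikube update-context"`)
	}
}
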

TestMultiControlPlane/serial/AddSecondaryNode (0.3s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-968000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p ha-968000 --control-plane -v=7 --alsologtostderr: exit status 83 (151.061766ms)

-- stdout --
	* The control-plane node ha-968000-m02 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-968000"

-- /stdout --
** stderr ** 
	I0805 16:14:08.994832    4258 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:14:08.995131    4258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:14:08.995137    4258 out.go:304] Setting ErrFile to fd 2...
	I0805 16:14:08.995141    4258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:14:08.995328    4258 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:14:08.995651    4258 mustload.go:65] Loading cluster: ha-968000
	I0805 16:14:08.995961    4258 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:14:08.996359    4258 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:14:08.996395    4258 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:14:09.004615    4258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52177
	I0805 16:14:09.005024    4258 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:14:09.005441    4258 main.go:141] libmachine: Using API Version  1
	I0805 16:14:09.005456    4258 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:14:09.005712    4258 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:14:09.005829    4258 main.go:141] libmachine: (ha-968000) Calling .GetState
	I0805 16:14:09.005922    4258 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:14:09.005987    4258 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid from json: 4210
	I0805 16:14:09.006941    4258 host.go:66] Checking if "ha-968000" exists ...
	I0805 16:14:09.007187    4258 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:14:09.007210    4258 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:14:09.015315    4258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52179
	I0805 16:14:09.015652    4258 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:14:09.016005    4258 main.go:141] libmachine: Using API Version  1
	I0805 16:14:09.016025    4258 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:14:09.016228    4258 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:14:09.016368    4258 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:14:09.016708    4258 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:14:09.016733    4258 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:14:09.024784    4258 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52181
	I0805 16:14:09.025106    4258 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:14:09.025415    4258 main.go:141] libmachine: Using API Version  1
	I0805 16:14:09.025425    4258 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:14:09.025622    4258 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:14:09.025742    4258 main.go:141] libmachine: (ha-968000-m02) Calling .GetState
	I0805 16:14:09.025834    4258 main.go:141] libmachine: (ha-968000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:14:09.025904    4258 main.go:141] libmachine: (ha-968000-m02) DBG | hyperkit pid from json: 4036
	I0805 16:14:09.026823    4258 main.go:141] libmachine: (ha-968000-m02) DBG | hyperkit pid 4036 missing from process table
	I0805 16:14:09.047896    4258 out.go:177] * The control-plane node ha-968000-m02 host is not running: state=Stopped
	I0805 16:14:09.068831    4258 out.go:177]   To start a cluster, run: "minikube start -p ha-968000"

** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-amd64 node add -p ha-968000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-968000 -n ha-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-968000 -n ha-968000: exit status 6 (144.908339ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0805 16:14:09.225695    4263 status.go:417] kubeconfig endpoint: get endpoint: "ha-968000" does not appear in /Users/jenkins/minikube-integration/19373-1122/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-968000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.30s)
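
`node add` bails out with exit status 83 here because mustload finds the m02 control-plane host Stopped (its hyperkit pid 4036 is missing from the process table). A sketch of how a harness might surface that code, assuming only the binary path and arguments shown in the log; the meaning of 83 is inferred from this run, not from a documented contract:

package main

import (
	"fmt"
	"os/exec"
)

// run executes the minikube binary under test and returns the combined
// output plus the process exit code (0 when the command succeeds).
func run(args ...string) (string, int) {
	cmd := exec.Command("out/minikube-darwin-amd64", args...)
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok {
		return string(out), exitErr.ExitCode()
	}
	return string(out), 0
}

func main() {
	out, code := run("node", "add", "-p", "ha-968000", "--control-plane", "-v=7", "--alsologtostderr")
	if code == 83 { // observed in this run when the m02 host was Stopped
		fmt.Print("host not running, start it first:\n", out)
	}
}
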

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.32s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:304: expected profile "ha-968000" in json of 'profile list' to include 4 nodes but have 3 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-968000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-968000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServe
rPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-968000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"KubernetesVersion\
":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\
":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMet
rics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
ha_test.go:307: expected profile "ha-968000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-968000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-968000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.3\",\"ClusterName\":\"ha-968000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.5\",\"Port\":8443,\"Kuber
netesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.169.0.8\",\"Port\":0,\"KubernetesVersion\":\"v1.30.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\"
:false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false
,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-968000 -n ha-968000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-968000 -n ha-968000: exit status 6 (146.471788ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0805 16:14:09.543971    4274 status.go:417] kubeconfig endpoint: get endpoint: "ha-968000" does not appear in /Users/jenkins/minikube-integration/19373-1122/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ha-968000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.32s)
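
ha_test.go:304 and :307 make their assertions by unmarshalling the `profile list --output json` dump quoted above and inspecting `Config.Nodes` and `Status`. A stripped-down sketch of that check, with struct fields limited to the keys visible in the log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors just the keys of the `profile list --output json`
// dump that the assertions read; every other config field is ignored.
type profileList struct {
	Valid []struct {
		Name   string
		Status string
		Config struct {
			Nodes []struct {
				Name         string
				IP           string
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-amd64", "profile", "list", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		if p.Name != "ha-968000" {
			continue
		}
		if len(p.Config.Nodes) != 4 {
			fmt.Printf("expected 4 nodes but have %d\n", len(p.Config.Nodes))
		}
		if p.Status != "HAppy" {
			fmt.Printf("expected %q status but have %q\n", "HAppy", p.Status)
		}
	}
}

In this run the profile lists only three nodes (the unnamed primary, m02, m04) and Status "Stopped", so both branches fire, matching the two assertion failures above.
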

TestMountStart/serial/StartWithMountFirst (136.87s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-684000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p mount-start-1-684000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : exit status 80 (2m16.797812291s)

-- stdout --
	* [mount-start-1-684000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster mount-start-1-684000
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "mount-start-1-684000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 46:f4:16:df:a1:2e
	* Failed to start hyperkit VM. Running "minikube delete -p mount-start-1-684000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7a:d0:36:1b:9a:6a
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7a:d0:36:1b:9a:6a
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-amd64 start -p mount-start-1-684000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-684000 -n mount-start-1-684000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-684000 -n mount-start-1-684000: exit status 7 (76.038635ms)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0805 16:20:26.794293    4622 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0805 16:20:26.794315    4622 status.go:249] status error: getting IP: IP address is not set

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-684000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMountStart/serial/StartWithMountFirst (136.87s)
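
Both provisioning attempts die in the same place: the driver generates a MAC (46:f4:16:df:a1:2e, then 7a:d0:36:1b:9a:6a after the retry) and never sees it appear in /var/db/dhcpd_leases. A hedged sketch of that wait loop, using a substring scan as a stand-in for the driver's real lease parser; the ~2s poll interval matches the attempt timestamps later in this report:

package main

import (
	"fmt"
	"os"
	"strings"
	"time"
)

// findIP polls the macOS dhcpd leases file until the given MAC shows up
// or the deadline passes. Substring matching stands in for the driver's
// real entry parsing.
func findIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		data, err := os.ReadFile("/var/db/dhcpd_leases")
		if err == nil && strings.Contains(string(data), mac) {
			return "lease found (real code extracts the ip_address field)", nil
		}
		time.Sleep(2 * time.Second) // matches the ~2s attempt cadence in the log
	}
	return "", fmt.Errorf("IP address never found in dhcp leases file Temporary error: could not find an IP address for %s", mac)
}

func main() {
	if _, err := findIP("46:f4:16:df:a1:2e", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
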

TestMultiNode/serial/FreshStart2Nodes (144.48s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-985000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0805 16:21:19.142127    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:21:50.546068    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 16:22:42.195526    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-985000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : exit status 90 (2m21.695257379s)

-- stdout --
	* [multinode-985000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "multinode-985000" primary control-plane node in "multinode-985000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	* Starting "multinode-985000-m02" worker node in "multinode-985000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=192.169.0.13
	
	

-- /stdout --
** stderr ** 
	I0805 16:20:32.303800    4640 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:20:32.303980    4640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:20:32.303986    4640 out.go:304] Setting ErrFile to fd 2...
	I0805 16:20:32.303990    4640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:20:32.304163    4640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:20:32.305609    4640 out.go:298] Setting JSON to false
	I0805 16:20:32.329307    4640 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3003,"bootTime":1722897029,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:20:32.329400    4640 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:20:32.351877    4640 out.go:177] * [multinode-985000] minikube v1.33.1 on Darwin 14.5
	I0805 16:20:32.392940    4640 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:20:32.393020    4640 notify.go:220] Checking for updates...
	I0805 16:20:32.435775    4640 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:20:32.456783    4640 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:20:32.477872    4640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:20:32.499010    4640 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:20:32.519936    4640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:20:32.541363    4640 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:20:32.571784    4640 out.go:177] * Using the hyperkit driver based on user configuration
	I0805 16:20:32.613992    4640 start.go:297] selected driver: hyperkit
	I0805 16:20:32.614020    4640 start.go:901] validating driver "hyperkit" against <nil>
	I0805 16:20:32.614042    4640 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:20:32.618322    4640 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:20:32.618456    4640 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 16:20:32.627075    4640 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 16:20:32.631391    4640 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:20:32.631417    4640 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 16:20:32.631452    4640 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:20:32.631678    4640 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:20:32.631709    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:20:32.631719    4640 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0805 16:20:32.631730    4640 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 16:20:32.631823    4640 start.go:340] cluster config:
	{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
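
cni.go logs "multinode detected (0 nodes found), recommending kindnet" and then sets NetworkPlugin=cni in the cluster config dump above: with --nodes=2 and no explicit CNI flag, kindnet is picked. A paraphrase of that decision as a sketch; the single-node fallback value below is an assumption, not taken from cni.go:

package main

import "fmt"

// chooseCNI paraphrases the decision logged by cni.go: an explicit choice
// wins; otherwise multinode clusters get kindnet.
func chooseCNI(requested string, multiNode bool) string {
	if requested != "" {
		return requested
	}
	if multiNode {
		return "kindnet" // "multinode detected ..., recommending kindnet"
	}
	return "auto" // hypothetical placeholder for the single-node default
}

func main() {
	fmt.Println(chooseCNI("", true)) // --nodes=2, no --cni flag -> kindnet
}
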
	I0805 16:20:32.631925    4640 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:20:32.673756    4640 out.go:177] * Starting "multinode-985000" primary control-plane node in "multinode-985000" cluster
	I0805 16:20:32.695001    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:20:32.695088    4640 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 16:20:32.695107    4640 cache.go:56] Caching tarball of preloaded images
	I0805 16:20:32.695319    4640 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:20:32.695338    4640 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:20:32.695809    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:20:32.695848    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json: {Name:mk470c2e849a0c86ee251e86e74d9f6dfdb47dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:32.696485    4640 start.go:360] acquireMachinesLock for multinode-985000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:20:32.696593    4640 start.go:364] duration metric: took 88.666µs to acquireMachinesLock for "multinode-985000"
	I0805 16:20:32.696646    4640 start.go:93] Provisioning new machine with config: &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPo
rt:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:20:32.696745    4640 start.go:125] createHost starting for "" (driver="hyperkit")
	I0805 16:20:32.718059    4640 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:20:32.718351    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:20:32.718416    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:20:32.728195    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52477
	I0805 16:20:32.728547    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:20:32.728938    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:20:32.728948    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:20:32.729147    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:20:32.729251    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:32.729369    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:32.729498    4640 start.go:159] libmachine.API.Create for "multinode-985000" (driver="hyperkit")
	I0805 16:20:32.729521    4640 client.go:168] LocalClient.Create starting
	I0805 16:20:32.729556    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:20:32.729608    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:20:32.729625    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:20:32.729685    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:20:32.729724    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:20:32.729737    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:20:32.729749    4640 main.go:141] libmachine: Running pre-create checks...
	I0805 16:20:32.729760    4640 main.go:141] libmachine: (multinode-985000) Calling .PreCreateCheck
	I0805 16:20:32.729840    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:32.729974    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:32.739224    4640 main.go:141] libmachine: Creating machine...
	I0805 16:20:32.739247    4640 main.go:141] libmachine: (multinode-985000) Calling .Create
	I0805 16:20:32.739475    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:32.739754    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.739457    4648 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:20:32.739852    4640 main.go:141] libmachine: (multinode-985000) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:20:32.920622    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.920524    4648 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa...
	I0805 16:20:32.957084    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.957005    4648 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk...
	I0805 16:20:32.957123    4640 main.go:141] libmachine: (multinode-985000) DBG | Writing magic tar header
	I0805 16:20:32.957134    4640 main.go:141] libmachine: (multinode-985000) DBG | Writing SSH key tar header
	I0805 16:20:32.957531    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.957490    4648 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000 ...
	I0805 16:20:33.331110    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:33.331140    4640 main.go:141] libmachine: (multinode-985000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid
	I0805 16:20:33.331159    4640 main.go:141] libmachine: (multinode-985000) DBG | Using UUID 3ac698fc-f622-443b-898d-9b152fa64288
	I0805 16:20:33.442582    4640 main.go:141] libmachine: (multinode-985000) DBG | Generated MAC e2:6:14:d2:13:ae
	I0805 16:20:33.442603    4640 main.go:141] libmachine: (multinode-985000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:20:33.442636    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I0805 16:20:33.442669    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I0805 16:20:33.442719    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3ac698fc-f622-443b-898d-9b152fa64288", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/1937
3-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:20:33.442758    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3ac698fc-f622-443b-898d-9b152fa64288 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=
serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
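
The DEBUG lines above print the full argv hyperkit is launched with. A sketch that rebuilds the same vector from the state dir, machine name, UUID, and resources, mirroring the flags in the log (-A -u -F, -c/-m, hostbridge/lpc, virtio-net, virtio-blk, ahci-cd, virtio-rnd, com1 autopty, kexec); this is a reconstruction for illustration, not the driver's actual builder:

package main

import "fmt"

// Kernel cmdline copied from the DEBUG output above.
const kernelCmdline = "earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"

// hyperkitArgs rebuilds the argument vector shown in the CmdLine debug line.
func hyperkitArgs(stateDir, name, uuid string, cpus, memMB int) []string {
	return []string{
		"-A", "-u",
		"-F", stateDir + "/hyperkit.pid",
		"-c", fmt.Sprint(cpus),
		"-m", fmt.Sprintf("%dM", memMB),
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
		"-s", "1:0,virtio-net",
		"-U", uuid,
		"-s", "2:0,virtio-blk," + stateDir + "/" + name + ".rawdisk",
		"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
		"-s", "4,virtio-rnd",
		"-l", "com1,autopty=" + stateDir + "/tty,log=" + stateDir + "/console-ring",
		"-f", "kexec," + stateDir + "/bzimage," + stateDir + "/initrd," + kernelCmdline,
	}
}

func main() {
	// Placeholder state dir; the run above uses the .minikube/machines path.
	fmt.Println(hyperkitArgs("/tmp/machines/multinode-985000",
		"multinode-985000", "3ac698fc-f622-443b-898d-9b152fa64288", 2, 2200))
}
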
	I0805 16:20:33.442774    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:20:33.445733    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Pid is 4651
	I0805 16:20:33.446145    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 0
	I0805 16:20:33.446167    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:33.446227    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:33.447073    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:33.447135    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:33.447152    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:33.447186    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:33.447202    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:33.447214    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:33.447222    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:33.447229    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:33.447247    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:33.447269    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:33.447287    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:33.447304    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:33.447321    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:33.453446    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:20:33.506623    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:20:33.507268    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:20:33.507283    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:20:33.507290    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:20:33.507298    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:20:33.891346    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:20:33.891387    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:20:34.006163    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:20:34.006177    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:20:34.006189    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:20:34.006208    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:20:34.007050    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:20:34.007082    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:20:35.448624    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 1
	I0805 16:20:35.448640    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:35.448724    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:35.449516    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:35.449591    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:35.449607    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:35.449619    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:35.449625    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:35.449648    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:35.449664    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:35.449695    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:35.449711    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:35.449719    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:35.449725    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:35.449731    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:35.449738    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:37.449834    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 2
	I0805 16:20:37.449851    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:37.449867    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:37.450676    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:37.450690    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:37.450697    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:37.450707    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:37.450722    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:37.450733    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:37.450744    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:37.450754    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:37.450771    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:37.450784    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:37.450797    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:37.450809    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:37.450819    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:39.451161    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 3
	I0805 16:20:39.451179    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:39.451277    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:39.452025    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:39.452066    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:39.452089    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:39.452104    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:39.452124    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:39.452141    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:39.452154    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:39.452161    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:39.452167    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:39.452183    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:39.452195    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:39.452202    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:39.452211    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:39.592041    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:20:39.592070    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:20:39.592076    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:20:39.615760    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:20:41.452210    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 4
	I0805 16:20:41.452225    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:41.452325    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:41.453101    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:41.453153    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:41.453162    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:41.453169    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:41.453178    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:41.453187    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:41.453194    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:41.453200    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:41.453219    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:41.453231    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:41.453241    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:41.453250    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:41.453258    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:43.455148    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 5
	I0805 16:20:43.455166    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:43.455244    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:43.456059    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:43.456103    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:20:43.456115    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:20:43.456122    4640 main.go:141] libmachine: (multinode-985000) DBG | Found match: e2:6:14:d2:13:ae
	I0805 16:20:43.456127    4640 main.go:141] libmachine: (multinode-985000) DBG | IP: 192.169.0.13
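The repeated "Attempt N" blocks above are the hyperkit driver polling macOS's DHCP lease database until an entry for the VM's newly assigned MAC address (e2:6:14:d2:13:ae) appears; once a match is found it reads off the IP (192.169.0.13) and proceeds. A minimal Go sketch of that poll-and-match loop: the /var/db/dhcpd_leases path and the roughly 2-second retry interval come from the log, while findLeaseIP and the simplified field parsing are illustrative assumptions, not the driver's actual code.

package main

import (
	"fmt"
	"os"
	"regexp"
	"time"
)

// findLeaseIP scans the dhcpd lease database for a {...} block whose
// hw_address field matches mac, and returns that block's ip_address.
func findLeaseIP(mac string) (string, bool) {
	data, err := os.ReadFile("/var/db/dhcpd_leases")
	if err != nil {
		return "", false // file may not exist before the first lease
	}
	blockRe := regexp.MustCompile(`(?s)\{.*?\}`)
	ipRe := regexp.MustCompile(`ip_address=(\S+)`)
	hwRe := regexp.MustCompile(`hw_address=\d+,(\S+)`)
	for _, block := range blockRe.FindAllString(string(data), -1) {
		if m := hwRe.FindStringSubmatch(block); m != nil && m[1] == mac {
			if ip := ipRe.FindStringSubmatch(block); ip != nil {
				return ip[1], true
			}
		}
	}
	return "", false
}

func main() {
	// The log above retries on a ~2s interval until the MAC shows up.
	for attempt := 1; ; attempt++ {
		if ip, ok := findLeaseIP("e2:6:14:d2:13:ae"); ok {
			fmt.Printf("attempt %d: IP %s\n", attempt, ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
}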
	I0805 16:20:43.456181    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:43.456781    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:43.456879    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:43.456972    4640 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 16:20:43.456985    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:20:43.457082    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:43.457144    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:43.457907    4640 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 16:20:43.457917    4640 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 16:20:43.457923    4640 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 16:20:43.457927    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:43.458023    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:43.458126    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:43.458255    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:43.458346    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:43.458472    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:43.458676    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:43.458683    4640 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 16:20:44.513424    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
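"Waiting for SSH to be available" is verified in the simplest possible way: the driver runs `exit 0` over the new connection and treats a zero exit status as proof that sshd is up and the key is accepted, which is what the empty "SSH cmd err, output: <nil>" line above records. A hedged sketch of the same probe, shelling out to the stock ssh client; the retry policy and flags here are assumptions, not libmachine's exact WaitForSSH implementation.

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	key := "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa"
	for {
		// A zero exit status from `exit 0` proves the SSH daemon is
		// reachable and the provisioned key authenticates.
		cmd := exec.Command("ssh",
			"-i", key,
			"-o", "StrictHostKeyChecking=no",
			"-o", "ConnectTimeout=5",
			"docker@192.169.0.13", "exit 0")
		if err := cmd.Run(); err == nil {
			log.Println("SSH is available")
			return
		}
		time.Sleep(time.Second)
	}
}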
	I0805 16:20:44.513443    4640 main.go:141] libmachine: Detecting the provisioner...
	I0805 16:20:44.513452    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.513594    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.513694    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.513791    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.513876    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.513996    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.514158    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.514165    4640 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 16:20:44.573082    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 16:20:44.573142    4640 main.go:141] libmachine: found compatible host: buildroot
	I0805 16:20:44.573149    4640 main.go:141] libmachine: Provisioning with buildroot...
	I0805 16:20:44.573155    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.573299    4640 buildroot.go:166] provisioning hostname "multinode-985000"
	I0805 16:20:44.573311    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.573416    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.573499    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.573585    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.573680    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.573795    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.573922    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.574068    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.574076    4640 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000 && echo "multinode-985000" | sudo tee /etc/hostname
	I0805 16:20:44.637872    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000
	
	I0805 16:20:44.637892    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.638029    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.638132    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.638218    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.638297    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.638429    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.638562    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.638582    4640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:20:44.698340    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:20:44.698360    4640 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:20:44.698377    4640 buildroot.go:174] setting up certificates
	I0805 16:20:44.698389    4640 provision.go:84] configureAuth start
	I0805 16:20:44.698397    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.698544    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:44.698658    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.698750    4640 provision.go:143] copyHostCerts
	I0805 16:20:44.698781    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:20:44.698850    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:20:44.698858    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:20:44.699001    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:20:44.699205    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:20:44.699246    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:20:44.699250    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:20:44.699341    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:20:44.699482    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:20:44.699528    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:20:44.699533    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:20:44.699615    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:20:44.699756    4640 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-985000]
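The server certificate generated here is what lets dockerd's TLS endpoint (port 2376, configured further down) present an identity that is valid for every name and address a client might dial, hence the SAN list [127.0.0.1 192.169.0.13 localhost minikube multinode-985000]. A self-contained Go sketch of issuing such a certificate follows; it uses a throwaway CA instead of minikube's persisted ca.pem/ca-key.pem and mirrors the SAN list from the log line above, but it is not provision.go's actual code, and error handling is elided for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (assumption; minikube reuses a stored CA key pair).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"jenkins.multinode-985000"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert whose SANs mirror the log:
	// san=[127.0.0.1 192.169.0.13 localhost minikube multinode-985000]
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "multinode-985000"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.13")},
		DNSNames:     []string{"localhost", "minikube", "multinode-985000"},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}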
	I0805 16:20:45.028860    4640 provision.go:177] copyRemoteCerts
	I0805 16:20:45.028920    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:20:45.028938    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.029080    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.029180    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.029338    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.029452    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:45.063652    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:20:45.063724    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:20:45.083743    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:20:45.083800    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0805 16:20:45.103791    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:20:45.103863    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:20:45.123716    4640 provision.go:87] duration metric: took 425.312704ms to configureAuth
	I0805 16:20:45.123731    4640 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:20:45.123881    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:20:45.123894    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:45.124028    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.124115    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.124206    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.124285    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.124381    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.124503    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.124632    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.124639    4640 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:20:45.176256    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:20:45.176269    4640 buildroot.go:70] root file system type: tmpfs
	I0805 16:20:45.176337    4640 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:20:45.176350    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.176482    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.176580    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.176695    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.176782    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.176911    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.177045    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.177090    4640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:20:45.240992    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:20:45.241023    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.241166    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.241270    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.241382    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.241469    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.241590    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.241743    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.241755    4640 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:20:46.765402    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:20:46.765418    4640 main.go:141] libmachine: Checking connection to Docker...
	I0805 16:20:46.765424    4640 main.go:141] libmachine: (multinode-985000) Calling .GetURL
	I0805 16:20:46.765563    4640 main.go:141] libmachine: Docker is up and running!
	I0805 16:20:46.765570    4640 main.go:141] libmachine: Reticulating splines...
	I0805 16:20:46.765575    4640 client.go:171] duration metric: took 14.036043683s to LocalClient.Create
	I0805 16:20:46.765592    4640 start.go:167] duration metric: took 14.036090848s to libmachine.API.Create "multinode-985000"
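The one-liner a few lines up ("sudo diff -u ... || { sudo mv ...; daemon-reload; enable; restart; }") makes the docker unit update idempotent: the new file is only swapped in, and the service only restarted, when the rendered unit differs from what is already on disk. On this first boot diff fails with "No such file or directory" because no unit exists yet, so the file is installed and docker is enabled and started. A sketch of the same compare-then-swap pattern; updateUnit and its error handling are assumptions, not minikube's code.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// updateUnit installs content at path and bounces docker, but only when
// the rendered unit actually changed, matching the effect of the
// `diff || { mv; daemon-reload; enable; restart; }` one-liner above.
func updateUnit(path string, content []byte) error {
	if old, err := os.ReadFile(path); err == nil && bytes.Equal(old, content) {
		return nil // unchanged: skip the daemon-reload and restart entirely
	}
	if err := os.WriteFile(path, content, 0o644); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", "docker"},
		{"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() { _ = updateUnit } // placeholder so the sketch compiles standalone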
	I0805 16:20:46.765602    4640 start.go:293] postStartSetup for "multinode-985000" (driver="hyperkit")
	I0805 16:20:46.765609    4640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:20:46.765620    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.765765    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:20:46.765778    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.765878    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.765972    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.766070    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.766168    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.808597    4640 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:20:46.814840    4640 command_runner.go:130] > NAME=Buildroot
	I0805 16:20:46.814852    4640 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:20:46.814856    4640 command_runner.go:130] > ID=buildroot
	I0805 16:20:46.814869    4640 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:20:46.814873    4640 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:20:46.814969    4640 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:20:46.814985    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:20:46.815099    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:20:46.815290    4640 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:20:46.815297    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:20:46.815526    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:20:46.832473    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:20:46.852626    4640 start.go:296] duration metric: took 87.015317ms for postStartSetup
	I0805 16:20:46.852653    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:46.853264    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:46.853417    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:20:46.853762    4640 start.go:128] duration metric: took 14.156998155s to createHost
	I0805 16:20:46.853776    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.853870    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.853964    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.854078    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.854160    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.854284    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:46.854405    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:46.854413    4640 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:20:46.906137    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722900047.071906799
	
	I0805 16:20:46.906149    4640 fix.go:216] guest clock: 1722900047.071906799
	I0805 16:20:46.906154    4640 fix.go:229] Guest: 2024-08-05 16:20:47.071906799 -0700 PDT Remote: 2024-08-05 16:20:46.85377 -0700 PDT m=+14.585721958 (delta=218.136799ms)
	I0805 16:20:46.906178    4640 fix.go:200] guest clock delta is within tolerance: 218.136799ms
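The clock check in fix.go reads `date +%s.%N` on the guest and compares it with the host's clock; here the 218.136799ms delta is small enough that no resync is forced. A sketch of parsing that output and applying a tolerance follows; the 2-second threshold below is an assumed value, not minikube's exact one.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	out := "1722900047.071906799" // guest `date +%s.%N`, from the log above
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)
	delta := time.Since(guest) // positive if the guest clock runs behind the host
	const tolerance = 2 * time.Second // assumed threshold
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}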
	I0805 16:20:46.906182    4640 start.go:83] releasing machines lock for "multinode-985000", held for 14.209573761s
	I0805 16:20:46.906200    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906321    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:46.906429    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906734    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906832    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906917    4640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:20:46.906947    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.906977    4640 ssh_runner.go:195] Run: cat /version.json
	I0805 16:20:46.906987    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.907036    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.907080    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.907105    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.907167    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.907190    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.907251    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.907285    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.907353    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.936969    4640 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0805 16:20:46.937263    4640 ssh_runner.go:195] Run: systemctl --version
	I0805 16:20:46.992747    4640 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:20:46.993626    4640 command_runner.go:130] > systemd 252 (252)
	I0805 16:20:46.993660    4640 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0805 16:20:46.993799    4640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:20:46.998949    4640 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:20:46.998969    4640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:20:46.999002    4640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:20:47.012276    4640 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:20:47.012544    4640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:20:47.012556    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:20:47.012657    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:20:47.027593    4640 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:20:47.027660    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:20:47.035836    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:20:47.044911    4640 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:20:47.044968    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:20:47.053571    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:20:47.061858    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:20:47.070031    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:20:47.078524    4640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:20:47.087870    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:20:47.096303    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:20:47.104482    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:20:47.112756    4640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:20:47.120033    4640 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:20:47.120127    4640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:20:47.128644    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:47.220387    4640 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:20:47.239567    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:20:47.239642    4640 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:20:47.254939    4640 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:20:47.255001    4640 command_runner.go:130] > [Unit]
	I0805 16:20:47.255011    4640 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:20:47.255015    4640 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:20:47.255020    4640 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:20:47.255026    4640 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:20:47.255030    4640 command_runner.go:130] > StartLimitBurst=3
	I0805 16:20:47.255034    4640 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:20:47.255037    4640 command_runner.go:130] > [Service]
	I0805 16:20:47.255041    4640 command_runner.go:130] > Type=notify
	I0805 16:20:47.255055    4640 command_runner.go:130] > Restart=on-failure
	I0805 16:20:47.255063    4640 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:20:47.255073    4640 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:20:47.255080    4640 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:20:47.255088    4640 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:20:47.255094    4640 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:20:47.255099    4640 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:20:47.255112    4640 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:20:47.255120    4640 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:20:47.255128    4640 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:20:47.255134    4640 command_runner.go:130] > ExecStart=
	I0805 16:20:47.255164    4640 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:20:47.255172    4640 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:20:47.255182    4640 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:20:47.255189    4640 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:20:47.255193    4640 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:20:47.255196    4640 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:20:47.255200    4640 command_runner.go:130] > LimitCORE=infinity
	I0805 16:20:47.255205    4640 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:20:47.255209    4640 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:20:47.255212    4640 command_runner.go:130] > TasksMax=infinity
	I0805 16:20:47.255215    4640 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:20:47.255220    4640 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:20:47.255225    4640 command_runner.go:130] > Delegate=yes
	I0805 16:20:47.255230    4640 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:20:47.255233    4640 command_runner.go:130] > KillMode=process
	I0805 16:20:47.255236    4640 command_runner.go:130] > [Install]
	I0805 16:20:47.255259    4640 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:20:47.255324    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:20:47.269909    4640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:20:47.286027    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:20:47.296365    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:20:47.306405    4640 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:20:47.369760    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:20:47.379998    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:20:47.394696    4640 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:20:47.394951    4640 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:20:47.397850    4640 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:20:47.398038    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:20:47.406063    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:20:47.419537    4640 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:20:47.514227    4640 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:20:47.637079    4640 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:20:47.637156    4640 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:20:47.651314    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:47.748259    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:20:50.076345    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.32806615s)
	I0805 16:20:50.076407    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:20:50.086580    4640 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 16:20:50.099944    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:20:50.110410    4640 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:20:50.206329    4640 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:20:50.317239    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:50.417670    4640 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:20:50.431617    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:20:50.443305    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:50.555307    4640 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:20:50.610408    4640 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:20:50.610481    4640 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:20:50.614751    4640 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0805 16:20:50.614762    4640 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0805 16:20:50.614767    4640 command_runner.go:130] > Device: 0,22	Inode: 806         Links: 1
	I0805 16:20:50.614772    4640 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0805 16:20:50.614775    4640 command_runner.go:130] > Access: 2024-08-05 23:20:50.735793184 +0000
	I0805 16:20:50.614784    4640 command_runner.go:130] > Modify: 2024-08-05 23:20:50.735793184 +0000
	I0805 16:20:50.614789    4640 command_runner.go:130] > Change: 2024-08-05 23:20:50.736793062 +0000
	I0805 16:20:50.614792    4640 command_runner.go:130] >  Birth: -
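The "Will wait 60s for socket path" step is a bounded stat poll on /var/run/cri-dockerd.sock, and the stat output above shows the path already exists as a unix socket (mode srw-rw----), so the wait returns immediately. A sketch of that bounded wait; waitForSocket and the 500ms poll interval are assumptions.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or the
// timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}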
	I0805 16:20:50.614829    4640 start.go:563] Will wait 60s for crictl version
	I0805 16:20:50.614890    4640 ssh_runner.go:195] Run: which crictl
	I0805 16:20:50.617807    4640 command_runner.go:130] > /usr/bin/crictl
	I0805 16:20:50.617933    4640 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:20:50.644026    4640 command_runner.go:130] > Version:  0.1.0
	I0805 16:20:50.644070    4640 command_runner.go:130] > RuntimeName:  docker
	I0805 16:20:50.644117    4640 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0805 16:20:50.644195    4640 command_runner.go:130] > RuntimeApiVersion:  v1
	I0805 16:20:50.645396    4640 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0805 16:20:50.645460    4640 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:20:50.661131    4640 command_runner.go:130] > 27.1.1
	I0805 16:20:50.662194    4640 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:20:50.677860    4640 command_runner.go:130] > 27.1.1
	I0805 16:20:50.700872    4640 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 16:20:50.700922    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:50.701316    4640 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0805 16:20:50.706154    4640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:20:50.715610    4640 kubeadm.go:883] updating cluster {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 16:20:50.715677    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:20:50.715736    4640 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:20:50.733572    4640 docker.go:685] Got preloaded images: 
	I0805 16:20:50.733584    4640 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0805 16:20:50.733634    4640 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:20:50.741005    4640 command_runner.go:139] > {"Repositories":{}}
	I0805 16:20:50.741090    4640 ssh_runner.go:195] Run: which lz4
	I0805 16:20:50.744527    4640 command_runner.go:130] > /usr/bin/lz4
	I0805 16:20:50.744558    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0805 16:20:50.744692    4640 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 16:20:50.747718    4640 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 16:20:50.747836    4640 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 16:20:50.747851    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0805 16:20:51.865752    4640 docker.go:649] duration metric: took 1.121114736s to copy over tarball
	I0805 16:20:51.865833    4640 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 16:20:54.241811    4640 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.375959074s)
	I0805 16:20:54.241825    4640 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 16:20:54.267125    4640 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:20:54.275283    4640 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0805 16:20:54.275373    4640 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0805 16:20:54.288931    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:54.386395    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:20:56.795159    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.408741228s)
	I0805 16:20:56.795248    4640 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:20:56.808093    4640 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0805 16:20:56.808107    4640 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0805 16:20:56.808111    4640 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0805 16:20:56.808116    4640 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0805 16:20:56.808120    4640 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0805 16:20:56.808123    4640 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0805 16:20:56.808128    4640 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0805 16:20:56.808135    4640 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:20:56.809018    4640 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 16:20:56.809035    4640 cache_images.go:84] Images are preloaded, skipping loading
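After docker is restarted with the rewritten repositories.json and the layers extracted from the preload tarball, the image list is queried again; every expected image is now present, so loading individual images is skipped. A sketch of that presence check; the expected list is copied from the stdout block above, the rest is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.30.3",
		"registry.k8s.io/kube-controller-manager:v1.30.3",
		"registry.k8s.io/kube-scheduler:v1.30.3",
		"registry.k8s.io/kube-proxy:v1.30.3",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	// Same query the log runs over SSH, executed locally here.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker images:", err)
		return
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing, would load:", img)
		}
	}
}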
	I0805 16:20:56.809048    4640 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0805 16:20:56.809127    4640 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-985000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 16:20:56.809195    4640 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 16:20:56.847007    4640 command_runner.go:130] > cgroupfs
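[editor's note] The "cgroupfs" result above feeds the cgroupDriver value in the kubelet configuration generated further down: kubelet and the container runtime must agree on a cgroup driver. A minimal Go sketch of the same detection step, shelling out to exactly the command the log runs (docker info --format {{.CgroupDriver}}); the helper name dockerCgroupDriver is ours for illustration, not minikube's:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// dockerCgroupDriver asks Docker which cgroup driver it is using, so the
// kubelet's cgroupDriver field can be set to match (here: "cgroupfs").
func dockerCgroupDriver() (string, error) {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	driver, err := dockerCgroupDriver()
	if err != nil {
		fmt.Println("detect failed:", err)
		return
	}
	fmt.Println("kubelet cgroupDriver should be:", driver)
}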
	I0805 16:20:56.847610    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:20:56.847620    4640 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 16:20:56.847630    4640 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 16:20:56.847650    4640 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-985000 NodeName:multinode-985000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 16:20:56.847744    4640 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-985000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
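[editor's note] The three-document kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration/KubeProxyConfiguration) is generated from the options dump at kubeadm.go:181, not hand-written. Purely as an illustrative sketch, and not minikube's actual implementation, the InitConfiguration section could be rendered with a Go text/template; the params struct and its field names below are hypothetical, and the values are the ones from this run:

package main

import (
	"os"
	"text/template"
)

// params holds the node-specific values substituted into the config.
// These names are invented for this sketch only.
type params struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	// Values taken from the log above.
	p := params{
		AdvertiseAddress: "192.169.0.13",
		BindPort:         8443,
		NodeName:         "multinode-985000",
		CRISocket:        "unix:///var/run/cri-dockerd.sock",
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}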
	I0805 16:20:56.847807    4640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 16:20:56.855919    4640 command_runner.go:130] > kubeadm
	I0805 16:20:56.855931    4640 command_runner.go:130] > kubectl
	I0805 16:20:56.855934    4640 command_runner.go:130] > kubelet
	I0805 16:20:56.855959    4640 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:20:56.856010    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 16:20:56.863284    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0805 16:20:56.876753    4640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:20:56.890292    4640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0805 16:20:56.904628    4640 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0805 16:20:56.907711    4640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
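[editor's note] The two commands above make the control-plane.minikube.internal mapping idempotent: the grep first checks whether the exact entry already exists, and if not, the one-liner filters out any stale line for the name, appends the fresh "IP<tab>name" pair, writes to a temp file, and copies it over /etc/hosts. A minimal Go sketch of the same update, assuming nothing about minikube internals (updateHosts is a hypothetical helper); note the logged command uses cp rather than a rename, which keeps the original file's inode intact:

package main

import (
	"fmt"
	"os"
	"strings"
)

// updateHosts drops any existing line ending in "<tab>name" (the grep -v in
// the logged command) and appends a fresh "ip<tab>name" entry.
func updateHosts(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry for this host name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	// Temp file then swap, mirroring the "> /tmp/h.$$; cp" pattern; the
	// logged command uses cp instead of a rename to preserve the inode.
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := updateHosts("/etc/hosts", "192.169.0.13", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}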
	I0805 16:20:56.917108    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:57.013172    4640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:20:57.028650    4640 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000 for IP: 192.169.0.13
	I0805 16:20:57.028663    4640 certs.go:194] generating shared ca certs ...
	I0805 16:20:57.028674    4640 certs.go:226] acquiring lock for ca certs: {Name:mkb83e058d89c7d4e66f4136f377a3c305b13735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.028863    4640 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key
	I0805 16:20:57.028935    4640 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key
	I0805 16:20:57.028946    4640 certs.go:256] generating profile certs ...
	I0805 16:20:57.028995    4640 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key
	I0805 16:20:57.029007    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt with IP's: []
	I0805 16:20:57.088127    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt ...
	I0805 16:20:57.088142    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt: {Name:mkb7087fa165ae496621b10df42dfd2f8603360a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.088531    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key ...
	I0805 16:20:57.088540    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key: {Name:mk37e627de9c39a2300d317d721ebf92a202a17e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.088775    4640 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec
	I0805 16:20:57.088790    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.13]
	I0805 16:20:57.189318    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec ...
	I0805 16:20:57.189336    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec: {Name:mkb4501af4f6db766eb719de2f42fc564a23d2d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.189653    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec ...
	I0805 16:20:57.189669    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec: {Name:mke641ddecfc5629bb592a5b6321d446ed3b31bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.189903    4640 certs.go:381] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt
	I0805 16:20:57.190140    4640 certs.go:385] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key
	I0805 16:20:57.190318    4640 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key
	I0805 16:20:57.190336    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt with IP's: []
	I0805 16:20:57.386717    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt ...
	I0805 16:20:57.386733    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt: {Name:mk486344c8c5b8383e5349f68a995b553e8d31c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.387043    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key ...
	I0805 16:20:57.387052    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key: {Name:mk2b24e1a5e962e12395adf21e4f6ad64901ee0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
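[editor's note] The apiserver certificate generated above carries four IP SANs: 10.96.0.1 (the first address of the 10.96.0.0/12 service CIDR, i.e. the in-cluster "kubernetes" Service ClusterIP), 127.0.0.1, 10.0.0.1, and the node IP 192.169.0.13, so the apiserver presents a valid TLS identity on any of them. A self-contained Go sketch of issuing a certificate with that SAN set via the standard crypto/x509 package; it is self-signed for brevity, whereas the real certificate is signed by the minikubeCA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // the IP SANs from the log above
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.169.0.13"),
		},
	}
	// Template doubles as parent here (self-signed); minikube instead signs
	// with its CA certificate and key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}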
	I0805 16:20:57.387278    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 16:20:57.387306    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 16:20:57.387325    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 16:20:57.387349    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 16:20:57.387368    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 16:20:57.387391    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 16:20:57.387411    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 16:20:57.387432    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 16:20:57.387531    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem (1338 bytes)
	W0805 16:20:57.387583    4640 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0805 16:20:57.387591    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:20:57.387621    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem (1082 bytes)
	I0805 16:20:57.387656    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:20:57.387684    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem (1675 bytes)
	I0805 16:20:57.387747    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:20:57.387781    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem -> /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.387803    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.387822    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.388188    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:20:57.408800    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 16:20:57.429927    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:20:57.449924    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:20:57.470736    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 16:20:57.490564    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 16:20:57.511342    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:20:57.531190    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 16:20:57.551984    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0805 16:20:57.571601    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0805 16:20:57.592369    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:20:57.611866    4640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 16:20:57.626527    4640 ssh_runner.go:195] Run: openssl version
	I0805 16:20:57.630504    4640 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0805 16:20:57.630711    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0805 16:20:57.638913    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642115    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642280    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642315    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.646345    4640 command_runner.go:130] > 51391683
	I0805 16:20:57.646544    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0805 16:20:57.654953    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0805 16:20:57.663842    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667242    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667258    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667300    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.671438    4640 command_runner.go:130] > 3ec20f2e
	I0805 16:20:57.671648    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 16:20:57.679692    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:20:57.688061    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691411    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691493    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691531    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.695572    4640 command_runner.go:130] > b5213941
	I0805 16:20:57.695754    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
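[editor's note] The repeated "openssl x509 -hash" / "ln -fs" pairs above implement OpenSSL's hashed-directory lookup: a CA in /etc/ssl/certs is located via a symlink named <subject-hash>.0, which is why the run printed b5213941 for minikubeCA.pem and then created /etc/ssl/certs/b5213941.0. A small Go sketch of that step, shelling out to the same openssl command the log runs; linkByHash is a hypothetical helper name:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of a PEM certificate and
// creates the "<hash>.0" symlink OpenSSL expects in its certs directory.
func linkByHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}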
	I0805 16:20:57.704703    4640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:20:57.707752    4640 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 16:20:57.707872    4640 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 16:20:57.707921    4640 kubeadm.go:392] StartCluster: {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:20:57.708054    4640 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:20:57.720408    4640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 16:20:57.731114    4640 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0805 16:20:57.731128    4640 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0805 16:20:57.731133    4640 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0805 16:20:57.731194    4640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 16:20:57.739645    4640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 16:20:57.751095    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0805 16:20:57.751108    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0805 16:20:57.751113    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0805 16:20:57.751120    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:20:57.751266    4640 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:20:57.751273    4640 kubeadm.go:157] found existing configuration files:
	
	I0805 16:20:57.751324    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 16:20:57.759086    4640 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:20:57.759185    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:20:57.759233    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 16:20:57.769060    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 16:20:57.778103    4640 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:20:57.778143    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:20:57.778190    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 16:20:57.786612    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 16:20:57.794733    4640 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:20:57.794754    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:20:57.794796    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 16:20:57.802671    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 16:20:57.810242    4640 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:20:57.810264    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:20:57.810299    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 16:20:57.818339    4640 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 16:20:57.890449    4640 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 16:20:57.890461    4640 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0805 16:20:57.890501    4640 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 16:20:57.890507    4640 command_runner.go:130] > [preflight] Running pre-flight checks
	I0805 16:20:57.984851    4640 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 16:20:57.984855    4640 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 16:20:57.984956    4640 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 16:20:57.984962    4640 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 16:20:57.985041    4640 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 16:20:57.985038    4640 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 16:20:58.152965    4640 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:20:58.152995    4640 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:20:58.175785    4640 out.go:204]   - Generating certificates and keys ...
	I0805 16:20:58.175840    4640 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0805 16:20:58.175851    4640 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 16:20:58.175914    4640 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0805 16:20:58.175920    4640 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 16:20:58.229002    4640 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 16:20:58.229016    4640 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 16:20:58.322701    4640 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 16:20:58.322717    4640 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0805 16:20:58.394063    4640 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 16:20:58.394077    4640 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0805 16:20:58.601975    4640 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 16:20:58.601995    4640 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0805 16:20:58.821056    4640 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 16:20:58.821065    4640 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0805 16:20:58.821204    4640 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:58.821214    4640 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.150811    4640 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 16:20:59.150817    4640 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0805 16:20:59.151036    4640 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.151046    4640 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.206073    4640 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 16:20:59.206088    4640 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 16:20:59.294956    4640 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 16:20:59.294966    4640 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 16:20:59.348591    4640 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 16:20:59.348602    4640 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0805 16:20:59.348788    4640 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:20:59.348797    4640 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:20:59.511379    4640 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:20:59.511395    4640 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:20:59.789652    4640 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 16:20:59.789666    4640 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 16:20:59.965508    4640 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:20:59.965517    4640 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:21:00.208268    4640 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:21:00.208284    4640 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:21:00.402575    4640 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:21:00.402582    4640 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:21:00.409122    4640 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 16:21:00.409137    4640 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 16:21:00.410639    4640 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:21:00.410652    4640 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:21:00.430944    4640 out.go:204]   - Booting up control plane ...
	I0805 16:21:00.431017    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:21:00.431032    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:21:00.431106    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:21:00.431106    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:21:00.431174    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:21:00.431182    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:21:00.431274    4640 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:21:00.431286    4640 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:21:00.431361    4640 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:21:00.431369    4640 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:21:00.431399    4640 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 16:21:00.431405    4640 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0805 16:21:00.540991    4640 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 16:21:00.541004    4640 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 16:21:00.541076    4640 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 16:21:00.541081    4640 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 16:21:01.042556    4640 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.719164ms
	I0805 16:21:01.042573    4640 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.719164ms
	I0805 16:21:01.042632    4640 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 16:21:01.042639    4640 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 16:21:05.541995    4640 kubeadm.go:310] [api-check] The API server is healthy after 4.502407968s
	I0805 16:21:05.542014    4640 command_runner.go:130] > [api-check] The API server is healthy after 4.502407968s
	I0805 16:21:05.551474    4640 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 16:21:05.551486    4640 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 16:21:05.558278    4640 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 16:21:05.558284    4640 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 16:21:05.572116    4640 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 16:21:05.572130    4640 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0805 16:21:05.572281    4640 kubeadm.go:310] [mark-control-plane] Marking the node multinode-985000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 16:21:05.572292    4640 command_runner.go:130] > [mark-control-plane] Marking the node multinode-985000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 16:21:05.579214    4640 kubeadm.go:310] [bootstrap-token] Using token: 0mwls8.ribzsy6ooov2flu0
	I0805 16:21:05.579225    4640 command_runner.go:130] > [bootstrap-token] Using token: 0mwls8.ribzsy6ooov2flu0
	I0805 16:21:05.613851    4640 out.go:204]   - Configuring RBAC rules ...
	I0805 16:21:05.613974    4640 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 16:21:05.613988    4640 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 16:21:05.655317    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 16:21:05.655329    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 16:21:05.659733    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 16:21:05.659737    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 16:21:05.661608    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 16:21:05.661619    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 16:21:05.663605    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 16:21:05.663612    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 16:21:05.665771    4640 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 16:21:05.665778    4640 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 16:21:05.947572    4640 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 16:21:05.947585    4640 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 16:21:06.357765    4640 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 16:21:06.357776    4640 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0805 16:21:06.946930    4640 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 16:21:06.946942    4640 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0805 16:21:06.947937    4640 kubeadm.go:310] 
	I0805 16:21:06.947989    4640 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 16:21:06.947996    4640 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0805 16:21:06.948000    4640 kubeadm.go:310] 
	I0805 16:21:06.948071    4640 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 16:21:06.948080    4640 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0805 16:21:06.948088    4640 kubeadm.go:310] 
	I0805 16:21:06.948121    4640 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 16:21:06.948125    4640 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0805 16:21:06.948179    4640 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 16:21:06.948187    4640 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 16:21:06.948229    4640 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 16:21:06.948234    4640 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 16:21:06.948237    4640 kubeadm.go:310] 
	I0805 16:21:06.948284    4640 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 16:21:06.948302    4640 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0805 16:21:06.948309    4640 kubeadm.go:310] 
	I0805 16:21:06.948354    4640 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 16:21:06.948367    4640 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 16:21:06.948375    4640 kubeadm.go:310] 
	I0805 16:21:06.948414    4640 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 16:21:06.948418    4640 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0805 16:21:06.948479    4640 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 16:21:06.948488    4640 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 16:21:06.948558    4640 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 16:21:06.948564    4640 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 16:21:06.948570    4640 kubeadm.go:310] 
	I0805 16:21:06.948633    4640 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 16:21:06.948638    4640 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0805 16:21:06.948701    4640 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 16:21:06.948708    4640 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0805 16:21:06.948715    4640 kubeadm.go:310] 
	I0805 16:21:06.948788    4640 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.948795    4640 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.948879    4640 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf \
	I0805 16:21:06.948886    4640 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf \
	I0805 16:21:06.948905    4640 kubeadm.go:310] 	--control-plane 
	I0805 16:21:06.948911    4640 command_runner.go:130] > 	--control-plane 
	I0805 16:21:06.948916    4640 kubeadm.go:310] 
	I0805 16:21:06.948980    4640 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 16:21:06.948984    4640 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0805 16:21:06.948987    4640 kubeadm.go:310] 
	I0805 16:21:06.949052    4640 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.949057    4640 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.949136    4640 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf 
	I0805 16:21:06.949141    4640 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf 
	I0805 16:21:06.949613    4640 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 16:21:06.949621    4640 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 16:21:06.949644    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:21:06.949649    4640 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 16:21:06.972147    4640 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0805 16:21:07.030449    4640 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0805 16:21:07.036220    4640 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0805 16:21:07.036233    4640 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0805 16:21:07.036239    4640 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0805 16:21:07.036249    4640 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 16:21:07.036254    4640 command_runner.go:130] > Access: 2024-08-05 23:20:43.694299549 +0000
	I0805 16:21:07.036259    4640 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0805 16:21:07.036264    4640 command_runner.go:130] > Change: 2024-08-05 23:20:41.058596444 +0000
	I0805 16:21:07.036266    4640 command_runner.go:130] >  Birth: -
	I0805 16:21:07.036368    4640 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0805 16:21:07.036375    4640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0805 16:21:07.050414    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0805 16:21:07.243070    4640 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0805 16:21:07.246445    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0805 16:21:07.250670    4640 command_runner.go:130] > serviceaccount/kindnet created
	I0805 16:21:07.255971    4640 command_runner.go:130] > daemonset.apps/kindnet created
	I0805 16:21:07.257424    4640 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 16:21:07.257500    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-985000 minikube.k8s.io/updated_at=2024_08_05T16_21_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=multinode-985000 minikube.k8s.io/primary=true
	I0805 16:21:07.257502    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.266956    4640 command_runner.go:130] > -16
	I0805 16:21:07.267023    4640 ops.go:34] apiserver oom_adj: -16
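[editor's note] The oom_adj check above (the bash line "cat /proc/$(pgrep kube-apiserver)/oom_adj" a few lines earlier) confirms the kernel treats the apiserver as a low-priority OOM-kill target at -16. A small Go sketch of the equivalent lookup, assuming a single pgrep match; readOOMAdj is our name, not minikube's:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// readOOMAdj resolves a process name to a PID via pgrep and reads its
// /proc/<pid>/oom_adj value (legacy interface; modern kernels also expose
// oom_score_adj). Assumes pgrep returns exactly one PID.
func readOOMAdj(process string) (string, error) {
	pid, err := exec.Command("pgrep", process).Output()
	if err != nil {
		return "", err
	}
	data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	adj, err := readOOMAdj("kube-apiserver")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("apiserver oom_adj:", adj) // the run above reported -16
}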
	I0805 16:21:07.390396    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0805 16:21:07.392070    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.400579    4640 command_runner.go:130] > node/multinode-985000 labeled
	I0805 16:21:07.456213    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:07.893323    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.956622    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:08.392391    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:08.450793    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:08.892411    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:08.950456    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:09.393238    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:09.450291    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:09.892156    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:09.951159    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:10.393019    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:10.451734    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:10.893100    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:10.954360    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:11.393009    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:11.452879    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:11.894187    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:11.953480    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:12.392194    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:12.452444    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:12.894265    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:12.955367    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:13.392882    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:13.455680    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:13.892568    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:13.950195    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:14.393254    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:14.452940    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:14.892187    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:14.948447    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:15.392762    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:15.451815    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:15.892531    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:15.952781    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:16.393008    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:16.454659    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:16.892423    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:16.957989    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:17.392489    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:17.452653    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:17.892453    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:17.953809    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:18.392692    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:18.450726    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:18.893940    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:18.957266    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:19.393402    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:19.452345    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:19.892761    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:19.952524    4640 command_runner.go:130] > NAME      SECRETS   AGE
	I0805 16:21:19.952537    4640 command_runner.go:130] > default   0         1s
	I0805 16:21:19.952551    4640 kubeadm.go:1113] duration metric: took 12.695106906s to wait for elevateKubeSystemPrivileges
	I0805 16:21:19.952568    4640 kubeadm.go:394] duration metric: took 22.244643678s to StartCluster
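[editor's note] The long block of 'serviceaccounts "default" not found' errors above is expected, not a failure: minikube polls "kubectl get sa default" roughly every 500ms until kube-controller-manager's token controller creates the default ServiceAccount, which is what the "took 12.695106906s to wait for elevateKubeSystemPrivileges" line summarizes. A minimal sketch of that polling pattern, using plain kubectl rather than the versioned binary path in the log; waitForDefaultSA is a hypothetical helper and the timeout is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries "kubectl get sa default" until it succeeds
// (meaning the token controller has created the account) or the deadline
// passes.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil // service account exists
		}
		time.Sleep(500 * time.Millisecond) // the log polls at roughly this interval
	}
	return fmt.Errorf("default service account not created within %v", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}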
	I0805 16:21:19.952584    4640 settings.go:142] acquiring lock: {Name:mk564a817a54ecf2aef16a4d2309e85208c0231f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:21:19.952678    4640 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:19.953130    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/kubeconfig: {Name:mk2a0d8b4d330b3c26432fc65d015ddf98a9cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:21:19.953387    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 16:21:19.953391    4640 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:21:19.953437    4640 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 16:21:19.953474    4640 addons.go:69] Setting storage-provisioner=true in profile "multinode-985000"
	I0805 16:21:19.953501    4640 addons.go:234] Setting addon storage-provisioner=true in "multinode-985000"
	I0805 16:21:19.953507    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:19.953501    4640 addons.go:69] Setting default-storageclass=true in profile "multinode-985000"
	I0805 16:21:19.953520    4640 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:21:19.953542    4640 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-985000"
	I0805 16:21:19.953772    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.953787    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.953870    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.953897    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.962985    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52500
	I0805 16:21:19.963341    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52502
	I0805 16:21:19.963365    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.963645    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.963722    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.963735    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.963997    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.964004    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.964027    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.964249    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.964372    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.964430    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.964458    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.964465    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.964535    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.966651    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:19.966874    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:19.967275    4640 cert_rotation.go:137] Starting client certificate rotation controller
	I0805 16:21:19.967411    4640 addons.go:234] Setting addon default-storageclass=true in "multinode-985000"
	I0805 16:21:19.967434    4640 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:21:19.967665    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.967688    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.973226    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52504
	I0805 16:21:19.973568    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.973922    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.973942    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.974163    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.974282    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.974363    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.974444    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.975405    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:21:19.975491    4640 out.go:177] * Verifying Kubernetes components...
	I0805 16:21:19.976182    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52506
	I0805 16:21:19.976461    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.976795    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.976812    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.976999    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.977392    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.977409    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.986027    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52508
	I0805 16:21:19.986361    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.986712    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.986741    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.986959    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.987071    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.987149    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.987227    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.988179    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:21:19.988299    4640 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 16:21:19.988307    4640 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 16:21:19.988315    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:21:19.988395    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:21:19.988484    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:21:19.988568    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:21:19.988639    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:21:20.032241    4640 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:21:20.032361    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:20.069496    4640 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:21:20.069510    4640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 16:21:20.069530    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:21:20.069717    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:21:20.069824    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:21:20.069935    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:21:20.070041    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:21:20.084762    4640 command_runner.go:130] > apiVersion: v1
	I0805 16:21:20.084775    4640 command_runner.go:130] > data:
	I0805 16:21:20.084779    4640 command_runner.go:130] >   Corefile: |
	I0805 16:21:20.084782    4640 command_runner.go:130] >     .:53 {
	I0805 16:21:20.084785    4640 command_runner.go:130] >         errors
	I0805 16:21:20.084790    4640 command_runner.go:130] >         health {
	I0805 16:21:20.084794    4640 command_runner.go:130] >            lameduck 5s
	I0805 16:21:20.084796    4640 command_runner.go:130] >         }
	I0805 16:21:20.084812    4640 command_runner.go:130] >         ready
	I0805 16:21:20.084822    4640 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0805 16:21:20.084829    4640 command_runner.go:130] >            pods insecure
	I0805 16:21:20.084833    4640 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0805 16:21:20.084841    4640 command_runner.go:130] >            ttl 30
	I0805 16:21:20.084853    4640 command_runner.go:130] >         }
	I0805 16:21:20.084863    4640 command_runner.go:130] >         prometheus :9153
	I0805 16:21:20.084868    4640 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0805 16:21:20.084880    4640 command_runner.go:130] >            max_concurrent 1000
	I0805 16:21:20.084884    4640 command_runner.go:130] >         }
	I0805 16:21:20.084887    4640 command_runner.go:130] >         cache 30
	I0805 16:21:20.084898    4640 command_runner.go:130] >         loop
	I0805 16:21:20.084902    4640 command_runner.go:130] >         reload
	I0805 16:21:20.084905    4640 command_runner.go:130] >         loadbalance
	I0805 16:21:20.084908    4640 command_runner.go:130] >     }
	I0805 16:21:20.084911    4640 command_runner.go:130] > kind: ConfigMap
	I0805 16:21:20.084914    4640 command_runner.go:130] > metadata:
	I0805 16:21:20.084921    4640 command_runner.go:130] >   creationTimestamp: "2024-08-05T23:21:06Z"
	I0805 16:21:20.084926    4640 command_runner.go:130] >   name: coredns
	I0805 16:21:20.084929    4640 command_runner.go:130] >   namespace: kube-system
	I0805 16:21:20.084933    4640 command_runner.go:130] >   resourceVersion: "266"
	I0805 16:21:20.084937    4640 command_runner.go:130] >   uid: 5057af03-8824-4e67-a4b6-ef90c1ded7ce
	I0805 16:21:20.085056    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0805 16:21:20.184335    4640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:21:20.203408    4640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 16:21:20.278639    4640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:21:20.507141    4640 command_runner.go:130] > configmap/coredns replaced
	I0805 16:21:20.511660    4640 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0805 16:21:20.511929    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:20.511932    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:20.512124    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:20.512125    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:20.512341    4640 node_ready.go:35] waiting up to 6m0s for node "multinode-985000" to be "Ready" ...
	I0805 16:21:20.512409    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:20.512416    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.512423    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.512424    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:20.512428    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.512430    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.512438    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.512446    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.520076    4640 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0805 16:21:20.520087    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.520092    4640 round_trippers.go:580]     Audit-Id: 304f14c4-a466-4fb6-b401-b28f4df4dfa1
	I0805 16:21:20.520095    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.520103    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.520107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.520111    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.520113    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:20.520117    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.521443    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.521456    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.521464    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.521474    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"381","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.521479    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.521487    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.521502    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.521511    4640 round_trippers.go:580]     Audit-Id: bcd9e393-6b08-4ffb-a73b-6e7c430f0212
	I0805 16:21:20.521518    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.521831    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:20.521865    4640 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"381","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.521904    4640 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:20.521914    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.521921    4640 round_trippers.go:473]     Content-Type: application/json
	I0805 16:21:20.521930    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.521935    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.530726    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.530739    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.530744    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.530748    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:20.530751    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.530754    4640 round_trippers.go:580]     Audit-Id: ba15a3b2-b69b-473e-a331-81e01385ad47
	I0805 16:21:20.530756    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.530758    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.530761    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.530773    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"383","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.588534    4640 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0805 16:21:20.588563    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.588570    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.588737    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.588752    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.588765    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.588764    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.588772    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.588919    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.588920    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.588931    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.589012    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I0805 16:21:20.589020    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.589028    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.589034    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.597496    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.597508    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.597513    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.597518    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.597521    4640 round_trippers.go:580]     Content-Length: 1273
	I0805 16:21:20.597523    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.597525    4640 round_trippers.go:580]     Audit-Id: d7394cfc-1eb3-4623-8a7f-a5088a0398c8
	I0805 16:21:20.597527    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.597530    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.597844    4640 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"391"},"items":[{"metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0805 16:21:20.598117    4640 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0805 16:21:20.598145    4640 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0805 16:21:20.598150    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.598157    4640 round_trippers.go:473]     Content-Type: application/json
	I0805 16:21:20.598166    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.598171    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.619819    4640 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0805 16:21:20.619836    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.619842    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.619846    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.619849    4640 round_trippers.go:580]     Content-Length: 1220
	I0805 16:21:20.619852    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.619855    4640 round_trippers.go:580]     Audit-Id: 299d4cc8-0cb5-4dd5-80b3-5d54592ecd90
	I0805 16:21:20.619859    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.619861    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.619898    4640 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0805 16:21:20.619983    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.619992    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.620141    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.620153    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.620166    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.750372    4640 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0805 16:21:20.753871    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0805 16:21:20.759257    4640 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0805 16:21:20.767575    4640 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0805 16:21:20.774745    4640 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0805 16:21:20.786454    4640 command_runner.go:130] > pod/storage-provisioner created
	I0805 16:21:20.787838    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.787851    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.788087    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.788087    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.788098    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.788109    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.788117    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.788261    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.788280    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.788280    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.811467    4640 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0805 16:21:20.871433    4640 addons.go:510] duration metric: took 917.995637ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0805 16:21:21.014507    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:21.014532    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.014545    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.014553    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.014605    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:21.014619    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.014631    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.014638    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.017465    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.017464    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.017480    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.017492    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.017492    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.017496    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:21.017502    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.017504    4640 round_trippers.go:580]     Audit-Id: fb264fed-80ee-469b-a34e-7b1e8460f94b
	I0805 16:21:21.017506    4640 round_trippers.go:580]     Audit-Id: c9362211-8dfc-4385-87db-76c6486df53e
	I0805 16:21:21.017512    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.017513    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.017518    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.017519    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.017522    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.017524    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.017529    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.017545    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.017616    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"395","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:21.017684    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:21.017735    4640 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-985000" context rescaled to 1 replicas
	I0805 16:21:21.514170    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:21.514200    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.514219    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.514226    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.516804    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.516819    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.516826    4640 round_trippers.go:580]     Audit-Id: 9396255c-231d-48cb-a53f-22663307b969
	I0805 16:21:21.516830    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.516834    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.516839    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.516849    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.516854    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.516951    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.013275    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:22.013299    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:22.013311    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:22.013319    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:22.016138    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:22.016155    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:22.016163    4640 round_trippers.go:580]     Audit-Id: cc869aef-9ab4-4a7f-8835-cce2afa76dd9
	I0805 16:21:22.016168    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:22.016175    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:22.016182    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:22.016187    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:22.016193    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:22 GMT
	I0805 16:21:22.016497    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.512546    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:22.512561    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:22.512567    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:22.512572    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:22.515381    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:22.515393    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:22.515401    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:22.515407    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:22.515412    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:22.515416    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:22 GMT
	I0805 16:21:22.515420    4640 round_trippers.go:580]     Audit-Id: e7d470a0-7df5-4d85-9bb5-cbf15cfa989f
	I0805 16:21:22.515423    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:22.515634    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.515838    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:23.012594    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:23.012606    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:23.012612    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:23.012616    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:23.014085    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:23.014095    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:23.014101    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:23.014104    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:23.014107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:23.014109    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:23.014113    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:23 GMT
	I0805 16:21:23.014116    4640 round_trippers.go:580]     Audit-Id: e12d5034-3bd9-498b-844e-12133805ded9
	I0805 16:21:23.014306    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:23.513150    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:23.513163    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:23.513168    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:23.513172    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:23.514595    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:23.514604    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:23.514610    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:23.514614    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:23.514617    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:23.514619    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:23.514622    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:23 GMT
	I0805 16:21:23.514635    4640 round_trippers.go:580]     Audit-Id: 2bc52e3b-1575-453f-87fa-51f4301a9426
	I0805 16:21:23.514871    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:24.012814    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:24.012826    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:24.012832    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:24.012835    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:24.014366    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:24.014379    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:24.014384    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:24.014388    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:24.014406    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:24.014411    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:24.014414    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:24 GMT
	I0805 16:21:24.014417    4640 round_trippers.go:580]     Audit-Id: f14d8611-e5e1-45fe-92f3-95559148c71b
	I0805 16:21:24.014572    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:24.513607    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:24.513620    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:24.513626    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:24.513629    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:24.515210    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:24.515220    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:24.515242    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:24.515253    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:24.515260    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:24.515264    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:24.515268    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:24 GMT
	I0805 16:21:24.515271    4640 round_trippers.go:580]     Audit-Id: 0a897d84-d437-4212-b36d-e414fedf55d4
	I0805 16:21:24.515427    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:25.013253    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:25.013272    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:25.013283    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:25.013321    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:25.015275    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:25.015308    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:25.015317    4640 round_trippers.go:580]     Audit-Id: ced7b45c-a072-4322-89ab-d0cc21ddfb1d
	I0805 16:21:25.015322    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:25.015325    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:25.015328    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:25.015332    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:25.015336    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:25 GMT
	I0805 16:21:25.015627    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:25.015849    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:25.512881    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:25.512902    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:25.512914    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:25.512920    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:25.515502    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:25.515517    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:25.515524    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:25.515529    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:25.515534    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:25.515538    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:25.515542    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:25 GMT
	I0805 16:21:25.515545    4640 round_trippers.go:580]     Audit-Id: dd6b59c1-dde3-4d67-b446-8823ad717d4f
	I0805 16:21:25.515665    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:26.013787    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:26.013811    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:26.013824    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:26.013830    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:26.016420    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:26.016440    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:26.016463    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:26 GMT
	I0805 16:21:26.016470    4640 round_trippers.go:580]     Audit-Id: 19939705-2879-44e6-830c-0c86394087ed
	I0805 16:21:26.016473    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:26.016485    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:26.016490    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:26.016494    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:26.016965    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:26.512523    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:26.512536    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:26.512541    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:26.512544    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:26.514158    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:26.514167    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:26.514172    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:26.514176    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:26.514179    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:26.514182    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:26 GMT
	I0805 16:21:26.514184    4640 round_trippers.go:580]     Audit-Id: f2346665-2701-41e1-94b0-41a70aa2f170
	I0805 16:21:26.514187    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:26.514489    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:27.013107    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:27.013136    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:27.013148    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:27.013155    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:27.015615    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:27.015632    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:27.015639    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:27 GMT
	I0805 16:21:27.015655    4640 round_trippers.go:580]     Audit-Id: 6abee22d-c1db-48e9-99db-e07791ed571f
	I0805 16:21:27.015661    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:27.015664    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:27.015667    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:27.015672    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:27.015747    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:27.015996    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:27.513549    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:27.513570    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:27.513582    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:27.513589    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:27.516173    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:27.516189    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:27.516197    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:27.516200    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:27.516204    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:27.516209    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:27 GMT
	I0805 16:21:27.516212    4640 round_trippers.go:580]     Audit-Id: a227585b-ae23-4bd1-b1dc-643eadd970cc
	I0805 16:21:27.516215    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:27.516416    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:28.014104    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:28.014132    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:28.014143    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:28.014159    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:28.016690    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:28.016705    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:28.016713    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:28.016717    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:28.016721    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:28.016725    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:28.016728    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:28 GMT
	I0805 16:21:28.016731    4640 round_trippers.go:580]     Audit-Id: 0d14831c-cc1f-41a9-a252-85e191b9594d
	I0805 16:21:28.016834    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:28.512703    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:28.512726    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:28.512739    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:28.512747    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:28.515176    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:28.515190    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:28.515197    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:28 GMT
	I0805 16:21:28.515201    4640 round_trippers.go:580]     Audit-Id: 6af459f8-bb08-43bf-ac7f-51ccacd5d664
	I0805 16:21:28.515206    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:28.515211    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:28.515215    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:28.515219    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:28.515378    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.013324    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:29.013354    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:29.013360    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:29.013364    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:29.014793    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:29.014804    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:29.014809    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:29.014813    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:29 GMT
	I0805 16:21:29.014817    4640 round_trippers.go:580]     Audit-Id: 2e50ff34-0c55-4136-b537-eee73f73706d
	I0805 16:21:29.014819    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:29.014822    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:29.014826    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:29.015098    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.513802    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:29.513832    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:29.513844    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:29.513852    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:29.516479    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:29.516496    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:29.516504    4640 round_trippers.go:580]     Audit-Id: bcbc3920-26b4-45f4-b91a-ce0e3dc11770
	I0805 16:21:29.516529    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:29.516538    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:29.516544    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:29.516549    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:29.516554    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:29 GMT
	I0805 16:21:29.516682    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.516938    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:30.013325    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:30.013349    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:30.013436    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:30.013448    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:30.016209    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:30.016222    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:30.016228    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:30.016233    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:30 GMT
	I0805 16:21:30.016238    4640 round_trippers.go:580]     Audit-Id: fb0bd3e0-89c3-4c77-a27d-be315cab22b7
	I0805 16:21:30.016242    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:30.016277    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:30.016283    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:30.016477    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:30.514344    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:30.514386    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:30.514482    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:30.514494    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:30.518828    4640 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 16:21:30.518860    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:30.518870    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:30.518876    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:30.518882    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:30 GMT
	I0805 16:21:30.518888    4640 round_trippers.go:580]     Audit-Id: c1b08932-ee78-4dcb-a190-3a8b24421284
	I0805 16:21:30.518894    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:30.518899    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:30.519002    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:31.012673    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:31.012701    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:31.012712    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:31.012718    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:31.015543    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:31.015560    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:31.015568    4640 round_trippers.go:580]     Audit-Id: b6586a64-ec07-44ee-8a00-1f3b8a00e0bd
	I0805 16:21:31.015572    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:31.015576    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:31.015580    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:31.015583    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:31.015589    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:31 GMT
	I0805 16:21:31.015682    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:31.512531    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:31.512543    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:31.512550    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:31.512554    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:31.514066    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:31.514076    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:31.514081    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:31 GMT
	I0805 16:21:31.514085    4640 round_trippers.go:580]     Audit-Id: 7d410de7-b0d5-4d4e-8455-d31b0df7d302
	I0805 16:21:31.514089    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:31.514093    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:31.514096    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:31.514107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:31.514758    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:32.014110    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:32.014136    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:32.014147    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:32.014157    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:32.016553    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:32.016570    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:32.016580    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:32.016586    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:32.016592    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:32.016598    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:32.016602    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:32 GMT
	I0805 16:21:32.016605    4640 round_trippers.go:580]     Audit-Id: 67fdb64b-273a-46c2-aac5-c3b115422aa4
	I0805 16:21:32.016861    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:32.017132    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:32.513171    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:32.513188    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:32.513195    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:32.513198    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:32.514908    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:32.514920    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:32.514925    4640 round_trippers.go:580]     Audit-Id: 0f5a2e98-6be6-4963-8897-91c70642048c
	I0805 16:21:32.514928    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:32.514931    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:32.514933    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:32.514936    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:32.514939    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:32 GMT
	I0805 16:21:32.515082    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:33.013769    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:33.013803    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:33.013814    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:33.013822    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:33.016491    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:33.016509    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:33.016519    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:33 GMT
	I0805 16:21:33.016526    4640 round_trippers.go:580]     Audit-Id: 96b5f269-7be9-42a9-9687-cba57d05f76e
	I0805 16:21:33.016532    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:33.016538    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:33.016543    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:33.016548    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:33.016715    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:33.512751    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:33.512772    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:33.512783    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:33.512789    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:33.515431    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:33.515480    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:33.515498    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:33.515506    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:33.515510    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:33 GMT
	I0805 16:21:33.515513    4640 round_trippers.go:580]     Audit-Id: 6cd252a3-d07d-441e-bcf4-bc3bd00c2488
	I0805 16:21:33.515517    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:33.515520    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:33.515747    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.013003    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:34.013032    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:34.013043    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:34.013052    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:34.015447    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:34.015465    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:34.015472    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:34.015476    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:34.015479    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:34.015484    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:34.015487    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:34 GMT
	I0805 16:21:34.015492    4640 round_trippers.go:580]     Audit-Id: efcfb0d1-8345-4db5-bce9-e31085842da3
	I0805 16:21:34.015599    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.513298    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:34.513317    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:34.513376    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:34.513383    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:34.515051    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:34.515065    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:34.515072    4640 round_trippers.go:580]     Audit-Id: 2a42cb6a-0051-47bd-85f4-9f8ca80afa70
	I0805 16:21:34.515078    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:34.515081    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:34.515087    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:34.515099    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:34.515103    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:34 GMT
	I0805 16:21:34.515359    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.515540    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:35.013932    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:35.013957    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:35.013968    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:35.013976    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:35.016505    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:35.016524    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:35.016530    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:35.016537    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:35.016541    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:35.016544    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:35.016555    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:35 GMT
	I0805 16:21:35.016559    4640 round_trippers.go:580]     Audit-Id: 09fa0e04-c026-439e-9cd7-392fd82b16fe
	I0805 16:21:35.016913    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:35.513491    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:35.513514    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:35.513526    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:35.513532    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:35.515995    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:35.516012    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:35.516020    4640 round_trippers.go:580]     Audit-Id: a2b05a8a-9a91-4d20-93d0-b8701ac59b95
	I0805 16:21:35.516024    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:35.516036    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:35.516041    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:35.516055    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:35.516060    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:35 GMT
	I0805 16:21:35.516151    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:36.013521    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.013549    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.013561    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.013566    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.016095    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:36.016112    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.016119    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.016131    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.016136    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.016140    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.016144    4640 round_trippers.go:580]     Audit-Id: 77e04f39-a037-4ea2-9716-ad04139089d1
	I0805 16:21:36.016147    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.016230    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:36.016465    4640 node_ready.go:49] node "multinode-985000" has status "Ready":"True"
	I0805 16:21:36.016481    4640 node_ready.go:38] duration metric: took 15.504115701s for node "multinode-985000" to be "Ready" ...
	I0805 16:21:36.016489    4640 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
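[Editor's note] The loop above issues GET /api/v1/nodes/multinode-985000 roughly every 500ms until the node's Ready condition flips from False to True (here after ~15.5s), then moves on to waiting for the system-critical pods. A minimal client-go sketch of that node-readiness poll follows; the node name and 500ms cadence are taken from this log, but the kubeconfig path, the 6-minute budget, and the helper itself are illustrative assumptions, not minikube's actual node_ready.go implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeIsReady reports whether the NodeReady condition is True, which is
	// what flips the log above from "Ready":"False" to "Ready":"True".
	func nodeIsReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Hypothetical kubeconfig path; substitute the cluster's real one.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll every 500ms, the cadence visible in the log; the overall
		// timeout here is illustrative, not taken from the log.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-985000", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient API errors as "not ready yet"
				}
				return nodeIsReady(node), nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("node multinode-985000 is Ready")
	}
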
	I0805 16:21:36.016543    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:36.016551    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.016559    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.016563    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.019046    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:36.019057    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.019065    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.019069    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.019078    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.019081    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.019084    4640 round_trippers.go:580]     Audit-Id: 96048303-6e62-4ba8-a291-bc1ad976756e
	I0805 16:21:36.019091    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.019721    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
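[Editor's note] Before waiting on individual pods, the log shows a single PodList request against the kube-system namespace; the per-component labels quoted at 16:21:36 (k8s-app=kube-dns, component=etcd, and so on) appear to be matched against that one response rather than via separate label-selector queries. A hedged sketch of that listing step, reusing the imports and clientset type from the sketch above; the helper name and the client-side-filtering reading are assumptions inferred from the single PodList GET:

	// listSystemPods makes the one-shot PodList request seen above; matching
	// against the quoted labels (k8s-app=kube-dns, component=etcd, ...) is
	// assumed to happen client-side on the returned items.
	func listSystemPods(ctx context.Context, cs *kubernetes.Clientset) ([]corev1.Pod, error) {
		list, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return nil, err
		}
		return list.Items, nil
	}
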
	I0805 16:21:36.021921    4640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:36.021960    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:36.021964    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.021970    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.021974    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.023179    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.023187    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.023192    4640 round_trippers.go:580]     Audit-Id: ba42f387-f106-4773-86de-3a22085fd86a
	I0805 16:21:36.023195    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.023198    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.023200    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.023204    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.023208    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.023410    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:36.023652    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.023659    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.023665    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.023671    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.024732    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.024744    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.024752    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.024758    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.024765    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.024768    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.024771    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.024775    4640 round_trippers.go:580]     Audit-Id: 2008721c-b230-4e73-b037-d3a843d7c7c8
	I0805 16:21:36.024909    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:36.523495    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:36.523508    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.523514    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.523519    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.525003    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.525014    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.525020    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.525042    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.525049    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.525053    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.525060    4640 round_trippers.go:580]     Audit-Id: 1ad5a8dd-64b3-4881-9a8e-e5eaab368c53
	I0805 16:21:36.525066    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.525202    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:36.525483    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.525490    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.525498    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.525502    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.526801    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.526810    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.526814    4640 round_trippers.go:580]     Audit-Id: 71c4017f-a267-489e-86ed-59098eae3b88
	I0805 16:21:36.526817    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.526834    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.526840    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.526846    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.526850    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.527025    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:37.022759    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:37.022781    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.022791    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.022799    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.025487    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.025503    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.025510    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.025515    4640 round_trippers.go:580]     Audit-Id: 7446d9fd-22ed-4d20-b0f2-e8c4a88b04f4
	I0805 16:21:37.025536    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.025543    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.025547    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.025556    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.025649    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:37.026010    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.026020    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.026028    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.026033    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.027337    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.027346    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.027354    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.027359    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.027363    4640 round_trippers.go:580]     Audit-Id: a309eed4-f088-47f7-8b84-4761b59dbb8c
	I0805 16:21:37.027366    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.027368    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.027371    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.027425    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.522283    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:37.522304    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.522315    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.522322    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.524762    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.524776    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.524782    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.524788    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.524792    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.524795    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.524799    4640 round_trippers.go:580]     Audit-Id: eaef42a8-7b43-4091-9b70-8d31adc979e5
	I0805 16:21:37.524803    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.525073    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0805 16:21:37.525438    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.525480    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.525488    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.525492    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.526890    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.526903    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.526912    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.526918    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.526927    4640 round_trippers.go:580]     Audit-Id: a3a0e71a-c982-4504-9fae-e76101688c05
	I0805 16:21:37.526931    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.526935    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.526937    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.527034    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.527211    4640 pod_ready.go:92] pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.527220    4640 pod_ready.go:81] duration metric: took 1.505289062s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
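The half-second cadence of GETs above is the pod_ready poll: fetch the Pod, inspect its Ready condition, sleep, repeat until the 6m0s budget runs out. Below is a minimal client-go sketch of the same loop; waitForPodReady and podReady are our names, and the 500ms interval and 6m timeout are read off the log's cadence and budget rather than taken from minikube's source.

    package main

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the Pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    // waitForPodReady polls the API server on the ~500ms cadence seen in
    // the log until the pod is Ready or the timeout elapses.
    func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // treat errors as transient and keep polling
    			}
    			return podReady(pod), nil
    		})
    }

Each iteration is one GET of the pod followed by one GET of the node, which is exactly the request pairing visible in the trace above.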
	I0805 16:21:37.527230    4640 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.527259    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-985000
	I0805 16:21:37.527264    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.527269    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.527277    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.528379    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.528389    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.528394    4640 round_trippers.go:580]     Audit-Id: 3cf4f372-47fb-4b72-9b30-185d93d01537
	I0805 16:21:37.528401    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.528405    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.528408    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.528411    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.528414    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.528618    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-985000","namespace":"kube-system","uid":"8d7ca2d9-8c7b-41b9-a199-de6449107471","resourceVersion":"379","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.mirror":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030299Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0805 16:21:37.528833    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.528840    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.528845    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.528850    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.529802    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.529808    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.529813    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.529816    4640 round_trippers.go:580]     Audit-Id: 314df0bd-894e-4607-bad0-3348c18fe807
	I0805 16:21:37.529820    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.529823    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.529826    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.529833    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.530046    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.530203    4640 pod_ready.go:92] pod "etcd-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.530210    4640 pod_ready.go:81] duration metric: took 2.974841ms for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.530218    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.530249    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-985000
	I0805 16:21:37.530253    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.530259    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.530262    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.531449    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.531456    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.531461    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.531463    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.531467    4640 round_trippers.go:580]     Audit-Id: 1801a8f0-22d5-44e8-942c-ea521b1ffa66
	I0805 16:21:37.531469    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.531475    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.531477    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.531592    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-985000","namespace":"kube-system","uid":"9be3378a-5fab-4907-baad-507918e714e4","resourceVersion":"369","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.mirror":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0805 16:21:37.531810    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.531820    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.531825    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.531830    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.532663    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.532668    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.532672    4640 round_trippers.go:580]     Audit-Id: 6d0fc4ed-c609-4ee7-a57f-b61eed1bc442
	I0805 16:21:37.532675    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.532679    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.532682    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.532684    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.532688    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.532807    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.532958    4640 pod_ready.go:92] pod "kube-apiserver-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.532967    4640 pod_ready.go:81] duration metric: took 2.743443ms for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.532973    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.533000    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-985000
	I0805 16:21:37.533004    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.533009    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.533012    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.533987    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.533995    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.534000    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.534004    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.534020    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.534027    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.534031    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.534034    4640 round_trippers.go:580]     Audit-Id: 97e4dc5c-f4bf-419e-8b15-be800418054c
	I0805 16:21:37.534147    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-985000","namespace":"kube-system","uid":"4ad64361-65de-4b0b-b2a3-07df18c2e603","resourceVersion":"342","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.mirror":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.seen":"2024-08-05T23:21:06.366027130Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0805 16:21:37.534370    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.534377    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.534383    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.534386    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.535293    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.535301    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.535305    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.535308    4640 round_trippers.go:580]     Audit-Id: a4c04a0a-9401-41d1-a0fc-f2a2187abde4
	I0805 16:21:37.535310    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.535313    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.535320    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.535323    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.535432    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.535591    4640 pod_ready.go:92] pod "kube-controller-manager-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.535599    4640 pod_ready.go:81] duration metric: took 2.621545ms for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.535606    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.535629    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwgw7
	I0805 16:21:37.535634    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.535639    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.535643    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.536550    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.536557    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.536565    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.536570    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.536575    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.536578    4640 round_trippers.go:580]     Audit-Id: 5a688e80-7db3-4070-a1a8-c3419ddb4d44
	I0805 16:21:37.536580    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.536582    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.536704    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fwgw7","generateName":"kube-proxy-","namespace":"kube-system","uid":"3fb72e39-699d-4123-ae5e-e314a191d904","resourceVersion":"409","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b6258e6-7b31-4600-b32b-4a269867c123","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b6258e6-7b31-4600-b32b-4a269867c123\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0805 16:21:37.614745    4640 request.go:629] Waited for 77.807971ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.614815    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.614822    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.614839    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.614845    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.616956    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.616984    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.616989    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.616993    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.616996    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.616999    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.617002    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.617005    4640 round_trippers.go:580]     Audit-Id: e297627c-4c52-417b-935c-d406bf086c16
	I0805 16:21:37.617232    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.617428    4640 pod_ready.go:92] pod "kube-proxy-fwgw7" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.617437    4640 pod_ready.go:81] duration metric: took 81.82693ms for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
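The "Waited for ... due to client-side throttling, not priority and fairness" lines below and above come from client-go's client-side token-bucket rate limiter, configured through the QPS and Burst fields on rest.Config; the message explicitly distinguishes this from server-side API Priority and Fairness. A minimal sketch of where that limiter is configured; the QPS/Burst values here are illustrative, not minikube's actual settings.

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func newThrottledClient(kubeconfig string) (kubernetes.Interface, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	// Client-side token bucket: at most Burst requests in flight at
    	// once, refilled at QPS per second. Requests beyond the budget
    	// block and log "Waited for ... due to client-side throttling".
    	cfg.QPS = 5   // illustrative values only
    	cfg.Burst = 10
    	return kubernetes.NewForConfig(cfg)
    }

This is why the sub-100ms waits in the log are harmless: the client is pacing itself, not being rejected by the API server.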
	I0805 16:21:37.617444    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.815296    4640 request.go:629] Waited for 197.761592ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:21:37.815347    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:21:37.815355    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.815366    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.815376    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.817961    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.817976    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.818001    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.818008    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:37.818049    4640 round_trippers.go:580]     Audit-Id: cc44c4e8-8012-4718-aa24-c05fec399a2e
	I0805 16:21:37.818064    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.818078    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.818082    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.818186    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-985000","namespace":"kube-system","uid":"5e23b1b7-e45d-4b43-831c-aa835c5e536d","resourceVersion":"396","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.mirror":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.seen":"2024-08-05T23:21:06.366029633Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0805 16:21:38.014472    4640 request.go:629] Waited for 195.947535ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:38.014569    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:38.014578    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.014589    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.014597    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.017395    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.017406    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.017413    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.017418    4640 round_trippers.go:580]     Audit-Id: 925efcbc-f43b-4431-905e-26927bb76a48
	I0805 16:21:38.017422    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.017428    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.017434    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.017441    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.017905    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:38.018153    4640 pod_ready.go:92] pod "kube-scheduler-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:38.018164    4640 pod_ready.go:81] duration metric: took 400.713995ms for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:38.018173    4640 pod_ready.go:38] duration metric: took 2.001673669s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:21:38.018198    4640 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:21:38.018268    4640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:21:38.030133    4640 command_runner.go:130] > 1977
	I0805 16:21:38.030360    4640 api_server.go:72] duration metric: took 18.07694495s to wait for apiserver process to appear ...
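The lone "1977" printed by command_runner above is the apiserver's PID: pgrep -x matches the process name exactly, -n picks the newest match, and -f matches against the full command line, here the pattern kube-apiserver.*minikube.*. minikube runs this over SSH inside the VM; a local sketch of the same probe (apiserverPID is our name):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // apiserverPID mirrors the log's ssh_runner step: pgrep prints the
    // PID of the newest process whose full command line matches.
    func apiserverPID() (string, error) {
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		return "", fmt.Errorf("kube-apiserver process not found: %w", err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }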
	I0805 16:21:38.030369    4640 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:21:38.030384    4640 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:21:38.034009    4640 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0805 16:21:38.034048    4640 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0805 16:21:38.034052    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.034058    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.034063    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.034646    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:38.034653    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.034658    4640 round_trippers.go:580]     Audit-Id: 9f5c9766-330c-4bb5-a5de-4c3a0fdbe474
	I0805 16:21:38.034662    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.034665    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.034668    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.034670    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.034673    4640 round_trippers.go:580]     Content-Length: 263
	I0805 16:21:38.034676    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.034687    4640 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0805 16:21:38.034733    4640 api_server.go:141] control plane version: v1.30.3
	I0805 16:21:38.034742    4640 api_server.go:131] duration metric: took 4.369143ms to wait for apiserver health ...
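After the process check, minikube hits /healthz until it returns 200 with the literal body "ok", then reads /version to record the control-plane version (v1.30.3 above). A minimal sketch against the same two endpoints; TLS setup is elided and InsecureSkipVerify is for illustration only, a real client should trust the cluster CA.

    package main

    import (
    	"crypto/tls"
    	"encoding/json"
    	"fmt"
    	"io"
    	"net/http"
    )

    // versionInfo mirrors the fields of the /version body shown in the log.
    type versionInfo struct {
    	Major      string `json:"major"`
    	Minor      string `json:"minor"`
    	GitVersion string `json:"gitVersion"`
    }

    func checkControlPlane(base string) (string, error) {
    	// Illustration only: real clients should use the cluster CA.
    	c := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}

    	resp, err := c.Get(base + "/healthz")
    	if err != nil {
    		return "", err
    	}
    	body, _ := io.ReadAll(resp.Body)
    	resp.Body.Close()
    	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
    		return "", fmt.Errorf("healthz not ready: %d %q", resp.StatusCode, body)
    	}

    	resp, err = c.Get(base + "/version")
    	if err != nil {
    		return "", err
    	}
    	defer resp.Body.Close()
    	var v versionInfo
    	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
    		return "", err
    	}
    	return v.GitVersion, nil // e.g. "v1.30.3" as in the log
    }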
	I0805 16:21:38.034747    4640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 16:21:38.213812    4640 request.go:629] Waited for 178.999213ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.213950    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.213960    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.213970    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.213980    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.217309    4640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:21:38.217324    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.217331    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.217336    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.217363    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.217372    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.217377    4640 round_trippers.go:580]     Audit-Id: 0f21513f-44e7-4d2f-bacd-2a12fceef757
	I0805 16:21:38.217381    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.217979    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0805 16:21:38.219249    4640 system_pods.go:59] 8 kube-system pods found
	I0805 16:21:38.219261    4640 system_pods.go:61] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:21:38.219265    4640 system_pods.go:61] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:21:38.219268    4640 system_pods.go:61] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:21:38.219271    4640 system_pods.go:61] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:21:38.219276    4640 system_pods.go:61] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:21:38.219278    4640 system_pods.go:61] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:21:38.219280    4640 system_pods.go:61] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:21:38.219283    4640 system_pods.go:61] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:21:38.219286    4640 system_pods.go:74] duration metric: took 184.535842ms to wait for pod list to return data ...
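The "8 kube-system pods found" summary comes from a single PodList GET followed by a per-pod phase check, printed one pod per line exactly as above. A sketch of the equivalent client-go call (listSystemPods is our name):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // listSystemPods prints each kube-system pod with its phase, the
    // same shape as the system_pods.go lines in the log.
    func listSystemPods(ctx context.Context, cs kubernetes.Interface) error {
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    		if p.Status.Phase != corev1.PodRunning {
    			return fmt.Errorf("pod %s not running", p.Name)
    		}
    	}
    	return nil
    }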
	I0805 16:21:38.219291    4640 default_sa.go:34] waiting for default service account to be created ...
	I0805 16:21:38.413643    4640 request.go:629] Waited for 194.308242ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:21:38.413680    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:21:38.413687    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.413695    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.413699    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.415522    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:38.415531    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.415536    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.415539    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.415543    4640 round_trippers.go:580]     Content-Length: 261
	I0805 16:21:38.415546    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.415548    4640 round_trippers.go:580]     Audit-Id: efc85c0c-9bbc-4cb7-8c14-19ba2f873800
	I0805 16:21:38.415551    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.415553    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.415563    4640 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b0626468-f73b-4e9b-8270-658495d43f4a","resourceVersion":"337","creationTimestamp":"2024-08-05T23:21:19Z"}}]}
	I0805 16:21:38.415681    4640 default_sa.go:45] found service account: "default"
	I0805 16:21:38.415690    4640 default_sa.go:55] duration metric: took 196.394719ms for default service account to be created ...
	I0805 16:21:38.415697    4640 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 16:21:38.613742    4640 request.go:629] Waited for 198.012461ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.613858    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.613864    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.613870    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.613874    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.616077    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.616090    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.616099    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.616106    4640 round_trippers.go:580]     Audit-Id: 3f8a6f23-788b-41c4-8dee-6ff59c02c21d
	I0805 16:21:38.616112    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.616116    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.616126    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.616143    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.616489    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0805 16:21:38.617747    4640 system_pods.go:86] 8 kube-system pods found
	I0805 16:21:38.617761    4640 system_pods.go:89] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:21:38.617766    4640 system_pods.go:89] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:21:38.617770    4640 system_pods.go:89] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:21:38.617773    4640 system_pods.go:89] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:21:38.617776    4640 system_pods.go:89] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:21:38.617780    4640 system_pods.go:89] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:21:38.617784    4640 system_pods.go:89] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:21:38.617787    4640 system_pods.go:89] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:21:38.617792    4640 system_pods.go:126] duration metric: took 202.090644ms to wait for k8s-apps to be running ...
	I0805 16:21:38.617801    4640 system_svc.go:44] waiting for kubelet service to be running ...
	I0805 16:21:38.617848    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:21:38.629448    4640 system_svc.go:56] duration metric: took 11.643357ms for WaitForService to wait for kubelet
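The kubelet check above runs `sudo systemctl is-active --quiet service kubelet` over SSH; with --quiet, systemctl prints nothing and the exit status alone (0 = active) carries the answer. A local sketch, omitting the over-SSH plumbing:

    package main

    import "os/exec"

    // kubeletActive mirrors the log's probe: `systemctl is-active --quiet`
    // exits 0 when the unit is active and non-zero otherwise, so the
    // error from Run() is the whole result.
    func kubeletActive() bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    }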
	I0805 16:21:38.629463    4640 kubeadm.go:582] duration metric: took 18.676048708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:21:38.629475    4640 node_conditions.go:102] verifying NodePressure condition ...
	I0805 16:21:38.814057    4640 request.go:629] Waited for 184.539621ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0805 16:21:38.814182    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0805 16:21:38.814193    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.814205    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.814213    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.817076    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.817092    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.817099    4640 round_trippers.go:580]     Audit-Id: 83bb2c88-8ae3-45b7-a0f6-9d3f9fead5f2
	I0805 16:21:38.817103    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.817112    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.817116    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.817123    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.817128    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:39 GMT
	I0805 16:21:38.817200    4640 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I0805 16:21:38.817474    4640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:21:38.817490    4640 node_conditions.go:123] node cpu capacity is 2
	I0805 16:21:38.817502    4640 node_conditions.go:105] duration metric: took 188.023135ms to run NodePressure ...
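The NodePressure step reads the node list once, reports capacity (the 17734596Ki ephemeral storage and 2 CPUs above), and verifies that no pressure condition is True. A sketch of the same checks; the assumption that only the three conditions named here are inspected is ours.

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // verifyNodePressure reports capacity and fails if any pressure
    // condition is True, mirroring node_conditions.go in the log.
    func verifyNodePressure(ctx context.Context, cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		fmt.Printf("node %s: ephemeral %s, cpu %s\n", n.Name,
    			n.Status.Capacity.StorageEphemeral().String(),
    			n.Status.Capacity.Cpu().String())
    		for _, c := range n.Status.Conditions {
    			switch c.Type {
    			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
    				if c.Status == corev1.ConditionTrue {
    					return fmt.Errorf("node %s under %s", n.Name, c.Type)
    				}
    			}
    		}
    	}
    	return nil
    }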
	I0805 16:21:38.817512    4640 start.go:241] waiting for startup goroutines ...
	I0805 16:21:38.817520    4640 start.go:246] waiting for cluster config update ...
	I0805 16:21:38.817530    4640 start.go:255] writing updated cluster config ...
	I0805 16:21:38.838343    4640 out.go:177] 
	I0805 16:21:38.859405    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:38.859465    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:38.881260    4640 out.go:177] * Starting "multinode-985000-m02" worker node in "multinode-985000" cluster
	I0805 16:21:38.923226    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:21:38.923254    4640 cache.go:56] Caching tarball of preloaded images
	I0805 16:21:38.923425    4640 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:21:38.923439    4640 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
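
	The preload check above amounts to testing whether the versioned tarball is already on disk before downloading. A minimal sketch of that existence test, with the path layout taken from the log line (the helper name is hypothetical):

	    package main

	    import (
	        "fmt"
	        "os"
	        "path/filepath"
	    )

	    // preloadExists is a hypothetical helper: true if the preload tarball for
	    // this Kubernetes version and runtime is already in the local cache.
	    func preloadExists(minikubeHome, k8sVersion, runtime string) bool {
	        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	        _, err := os.Stat(filepath.Join(minikubeHome, "cache", "preloaded-tarball", name))
	        return err == nil
	    }

	    func main() {
	        fmt.Println(preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.30.3", "docker"))
	    }
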
	I0805 16:21:38.923503    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:38.924257    4640 start.go:360] acquireMachinesLock for multinode-985000-m02: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:21:38.924355    4640 start.go:364] duration metric: took 78.775µs to acquireMachinesLock for "multinode-985000-m02"
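
	acquireMachinesLock above serializes machine creation per profile, retrying every 500ms with a 13m timeout; here it was uncontended and took 78µs. A sketch of the general acquire-with-timeout shape (channel-based and illustrative, not minikube's actual mutex implementation):

	    package main

	    import (
	        "errors"
	        "fmt"
	        "time"
	    )

	    // acquire treats a 1-slot channel as a mutex and gives up after timeout.
	    func acquire(lock chan struct{}, timeout time.Duration) error {
	        select {
	        case lock <- struct{}{}:
	            return nil
	        case <-time.After(timeout):
	            return errors.New("timed out acquiring machines lock")
	        }
	    }

	    func main() {
	        lock := make(chan struct{}, 1)
	        fmt.Println(acquire(lock, 13*time.Minute)) // <nil>: uncontended, like the log above
	        <-lock                                     // release
	    }
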
	I0805 16:21:38.924379    4640 start.go:93] Provisioning new machine with config: &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0805 16:21:38.924443    4640 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0805 16:21:38.946258    4640 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:21:38.946431    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:38.946482    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:38.956315    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52515
	I0805 16:21:38.956651    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:38.957008    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:38.957028    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:38.957245    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:38.957408    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:38.957527    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:38.957642    4640 start.go:159] libmachine.API.Create for "multinode-985000" (driver="hyperkit")
	I0805 16:21:38.957663    4640 client.go:168] LocalClient.Create starting
	I0805 16:21:38.957697    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:21:38.957735    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:21:38.957747    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:21:38.957790    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:21:38.957819    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:21:38.957833    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:21:38.957849    4640 main.go:141] libmachine: Running pre-create checks...
	I0805 16:21:38.957855    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .PreCreateCheck
	I0805 16:21:38.957933    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:38.957959    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:38.967700    4640 main.go:141] libmachine: Creating machine...
	I0805 16:21:38.967725    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .Create
	I0805 16:21:38.967957    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:38.968233    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:38.967940    4677 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:21:38.968338    4640 main.go:141] libmachine: (multinode-985000-m02) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:21:39.171726    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.171650    4677 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa...
	I0805 16:21:39.251408    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.251327    4677 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk...
	I0805 16:21:39.251421    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Writing magic tar header
	I0805 16:21:39.251439    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Writing SSH key tar header
	I0805 16:21:39.252021    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.251983    4677 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02 ...
	I0805 16:21:39.622286    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:39.622309    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid
	I0805 16:21:39.622382    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Using UUID ab5b9c9f-9e28-4bc2-8fcd-b98fce011173
	I0805 16:21:39.647304    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Generated MAC a6:1c:88:9c:44:3
	I0805 16:21:39.647324    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:21:39.647363    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:21:39.647396    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:21:39.647440    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:21:39.647475    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ab5b9c9f-9e28-4bc2-8fcd-b98fce011173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
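
	The argv logged above maps onto hyperkit's device model: each -s flag populates a virtual PCI slot, and -f kexec boots the kernel/initrd directly with no firmware. A sketch of assembling the same invocation from Go, with every path and the UUID reduced to placeholders; the flag annotations reflect hyperkit's documented options as best understood here:

	    package main

	    import "os/exec"

	    // hyperkitCmd mirrors the command line in the log; a real caller would
	    // start it and supervise the pid file, as the driver does above.
	    func hyperkitCmd(state, uuid string) *exec.Cmd {
	        return exec.Command("/usr/local/bin/hyperkit",
	            "-A",                        // generate ACPI tables for the guest
	            "-u",                        // RTC keeps UTC
	            "-F", state+"/hyperkit.pid", // pid file the driver polls
	            "-c", "2",                   // vCPUs
	            "-m", "2200M",               // memory
	            "-s", "0:0,hostbridge",
	            "-s", "31,lpc",
	            "-s", "1:0,virtio-net",      // vmnet NIC; the generated MAC is tied to -U
	            "-U", uuid,                  // stable UUID, so the DHCP lease is findable
	            "-s", "2:0,virtio-blk,"+state+"/disk.rawdisk",
	            "-s", "3,ahci-cd,"+state+"/boot2docker.iso",
	            "-s", "4,virtio-rnd",        // entropy device
	            "-l", "com1,autopty="+state+"/tty,log="+state+"/console-ring",
	            "-f", "kexec,"+state+"/bzimage,"+state+"/initrd,loglevel=3 console=ttyS0",
	        )
	    }

	    func main() { _ = hyperkitCmd("/tmp/machine", "ab5b9c9f-9e28-4bc2-8fcd-b98fce011173") }
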
	I0805 16:21:39.647493    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:21:39.650407    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Pid is 4678
	I0805 16:21:39.650823    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 0
	I0805 16:21:39.650838    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:39.650909    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:39.651807    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:39.651870    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:39.651899    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:39.651984    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:39.652006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:39.652022    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:39.652032    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:39.652039    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:39.652046    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:39.652082    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:39.652100    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:39.652113    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:39.652123    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:39.652143    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
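
	The attempts above poll /var/db/dhcpd_leases every two seconds until the guest's generated MAC appears with a fresh lease. A minimal sketch of that scan, assuming the macOS bootpd lease format implied by the log, where ip_address precedes hw_address inside each entry (the function name is illustrative):

	    package main

	    import (
	        "fmt"
	        "os"
	        "strings"
	    )

	    // ipForMAC scans bootpd's lease file for an entry whose hw_address ends in
	    // the target MAC and returns the ip_address captured earlier in that entry.
	    func ipForMAC(path, mac string) (string, error) {
	        data, err := os.ReadFile(path)
	        if err != nil {
	            return "", err
	        }
	        var ip string
	        for _, line := range strings.Split(string(data), "\n") {
	            line = strings.TrimSpace(line)
	            if v, ok := strings.CutPrefix(line, "ip_address="); ok {
	                ip = v
	            }
	            // hw_address lines look like "hw_address=1,a6:1c:88:9c:44:3".
	            if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac) {
	                return ip, nil
	            }
	        }
	        return "", fmt.Errorf("no lease for %s yet", mac)
	    }

	    func main() {
	        fmt.Println(ipForMAC("/var/db/dhcpd_leases", "a6:1c:88:9c:44:3"))
	    }
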
	I0805 16:21:39.657903    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:21:39.666018    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:21:39.666937    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:21:39.666963    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:21:39.666975    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:21:39.666990    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:21:40.050205    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:21:40.050221    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:21:40.165006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:21:40.165028    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:21:40.165042    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:21:40.165049    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:21:40.165899    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:21:40.165911    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:21:41.653048    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 1
	I0805 16:21:41.653066    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:41.653144    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:41.653911    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:41.653968    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:41.653979    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:41.653992    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:41.653998    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:41.654006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:41.654015    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:41.654030    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:41.654045    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:41.654053    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:41.654061    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:41.654070    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:41.654078    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:41.654093    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:43.655366    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 2
	I0805 16:21:43.655382    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:43.655471    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:43.656243    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:43.656291    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:43.656301    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:43.656319    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:43.656329    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:43.656351    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:43.656362    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:43.656369    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:43.656375    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:43.656391    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:43.656406    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:43.656416    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:43.656423    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:43.656437    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:45.657345    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 3
	I0805 16:21:45.657361    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:45.657459    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:45.658214    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:45.658269    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:45.658278    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:45.658286    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:45.658295    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:45.658310    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:45.658321    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:45.658329    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:45.658337    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:45.658349    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:45.658362    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:45.658370    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:45.658378    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:45.658387    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:45.751756    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:21:45.751812    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:21:45.751830    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:21:45.774801    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:21:47.659182    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 4
	I0805 16:21:47.659208    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:47.659291    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:47.660062    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:47.660112    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:47.660128    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:47.660137    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:47.660145    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:47.660153    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:47.660162    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:47.660178    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:47.660192    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:47.660204    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:47.660218    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:47.660230    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:47.660240    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:47.660260    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:49.662115    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 5
	I0805 16:21:49.662148    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:49.662310    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:49.663748    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:49.663812    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0805 16:21:49.663831    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b00c}
	I0805 16:21:49.663846    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found match: a6:1c:88:9c:44:3
	I0805 16:21:49.663856    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | IP: 192.169.0.14
	I0805 16:21:49.663945    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:49.664855    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:49.665006    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:49.665127    4640 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 16:21:49.665139    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:21:49.665271    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:49.665344    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:49.666326    4640 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 16:21:49.666337    4640 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 16:21:49.666342    4640 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 16:21:49.666348    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.666471    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.666603    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.666743    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.666869    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.667045    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.667279    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.667287    4640 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 16:21:49.724369    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
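
	The WaitForSSH step above boils down to retrying a trivial `exit 0` session until sshd in the guest answers. A sketch of that probe using golang.org/x/crypto/ssh; the address, retry interval, and client config (user, key auth) are placeholders left to the caller, not minikube's actual code:

	    package main

	    import (
	        "fmt"
	        "time"

	        "golang.org/x/crypto/ssh"
	    )

	    // waitForSSH retries until a session can run `exit 0` or the deadline passes.
	    func waitForSSH(addr string, cfg *ssh.ClientConfig, timeout time.Duration) error {
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
	                sess, serr := client.NewSession()
	                if serr == nil {
	                    serr = sess.Run("exit 0") // same no-op command as the log
	                    sess.Close()
	                }
	                client.Close()
	                if serr == nil {
	                    return nil
	                }
	            }
	            time.Sleep(2 * time.Second)
	        }
	        return fmt.Errorf("ssh on %s not ready within %s", addr, timeout)
	    }

	    func main() {
	        _ = waitForSSH // a real caller supplies "192.169.0.14:22" and key-based auth
	    }
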
	I0805 16:21:49.724382    4640 main.go:141] libmachine: Detecting the provisioner...
	I0805 16:21:49.724388    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.724522    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.724626    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.724719    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.724810    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.724938    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.725087    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.725094    4640 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 16:21:49.782403    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 16:21:49.782454    4640 main.go:141] libmachine: found compatible host: buildroot
	I0805 16:21:49.782460    4640 main.go:141] libmachine: Provisioning with buildroot...
	I0805 16:21:49.782466    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.782595    4640 buildroot.go:166] provisioning hostname "multinode-985000-m02"
	I0805 16:21:49.782606    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.782698    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.782797    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.782871    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.782964    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.783079    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.783204    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.783350    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.783359    4640 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000-m02 && echo "multinode-985000-m02" | sudo tee /etc/hostname
	I0805 16:21:49.854175    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000-m02
	
	I0805 16:21:49.854190    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.854319    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.854421    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.854492    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.854587    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.854712    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.854870    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.854882    4640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:21:49.917814    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:21:49.917830    4640 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:21:49.917840    4640 buildroot.go:174] setting up certificates
	I0805 16:21:49.917846    4640 provision.go:84] configureAuth start
	I0805 16:21:49.917856    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.917985    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:49.918095    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.918192    4640 provision.go:143] copyHostCerts
	I0805 16:21:49.918223    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:21:49.918280    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:21:49.918285    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:21:49.918411    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:21:49.918617    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:21:49.918652    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:21:49.918658    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:21:49.918733    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:21:49.918888    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:21:49.918922    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:21:49.918927    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:21:49.918994    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:21:49.919145    4640 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-985000-m02]
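
	The server cert generated above embeds exactly the SANs listed in the log, so TLS to the guest's docker daemon verifies against the VM's IP. A sketch of an equivalent x509 template, with CA signing and key generation elided; the lifetime mirrors the CertExpiration value from the config dump earlier:

	    package main

	    import (
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "fmt"
	        "math/big"
	        "net"
	        "time"
	    )

	    // serverCertTemplate reproduces the SAN set from the provision log above.
	    func serverCertTemplate() *x509.Certificate {
	        return &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-985000-m02"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.14")},
	            DNSNames:     []string{"localhost", "minikube", "multinode-985000-m02"},
	        }
	    }

	    func main() {
	        fmt.Println(serverCertTemplate().DNSNames)
	        // Signing would use x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey).
	    }
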
	I0805 16:21:50.072896    4640 provision.go:177] copyRemoteCerts
	I0805 16:21:50.072947    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:21:50.072962    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.073107    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.073199    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.073317    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.073426    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:50.108446    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:21:50.108519    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:21:50.128617    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:21:50.128684    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0805 16:21:50.148653    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:21:50.148720    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:21:50.168682    4640 provision.go:87] duration metric: took 250.828344ms to configureAuth
	I0805 16:21:50.168695    4640 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:21:50.168835    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:50.168849    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:50.168993    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.169087    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.169175    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.169262    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.169345    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.169486    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.169621    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.169628    4640 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:21:50.228062    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:21:50.228074    4640 buildroot.go:70] root file system type: tmpfs
	I0805 16:21:50.228150    4640 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:21:50.228164    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.228293    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.228388    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.228480    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.228586    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.228755    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.228888    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.228934    4640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:21:50.296901    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:21:50.296919    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.297064    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.297158    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.297250    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.297333    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.297475    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.297611    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.297624    4640 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:21:51.873922    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:21:51.873940    4640 main.go:141] libmachine: Checking connection to Docker...
	I0805 16:21:51.873964    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetURL
	I0805 16:21:51.874107    4640 main.go:141] libmachine: Docker is up and running!
	I0805 16:21:51.874115    4640 main.go:141] libmachine: Reticulating splines...
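
	The diff-or-replace one-liner a few lines above is an idempotency guard: docker is only re-enabled and restarted when the rendered unit actually differs (here it did not exist yet, hence the diff error followed by the fresh symlink). A sketch of that pattern over an abstract command runner; the runner signature is illustrative:

	    package main

	    import "fmt"

	    // updateUnit swaps in the new unit and restarts docker only on change.
	    func updateUnit(run func(string) error) error {
	        const cur = "/lib/systemd/system/docker.service"
	        if run("sudo diff -u "+cur+" "+cur+".new") == nil {
	            return nil // identical: leave the running daemon alone
	        }
	        return run("sudo mv " + cur + ".new " + cur +
	            " && sudo systemctl -f daemon-reload" +
	            " && sudo systemctl -f enable docker" +
	            " && sudo systemctl -f restart docker")
	    }

	    func main() {
	        // Stub runner that just echoes; a real one executes over SSH.
	        fmt.Println(updateUnit(func(cmd string) error { fmt.Println(cmd); return nil }))
	    }
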
	I0805 16:21:51.874120    4640 client.go:171] duration metric: took 12.916447572s to LocalClient.Create
	I0805 16:21:51.874129    4640 start.go:167] duration metric: took 12.916485141s to libmachine.API.Create "multinode-985000"
	I0805 16:21:51.874135    4640 start.go:293] postStartSetup for "multinode-985000-m02" (driver="hyperkit")
	I0805 16:21:51.874142    4640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:21:51.874152    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:51.874292    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:21:51.874313    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:51.874416    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:51.874505    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.874583    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:51.874657    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:51.915394    4640 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:21:51.919538    4640 command_runner.go:130] > NAME=Buildroot
	I0805 16:21:51.919549    4640 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:21:51.919553    4640 command_runner.go:130] > ID=buildroot
	I0805 16:21:51.919557    4640 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:21:51.919560    4640 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:21:51.919635    4640 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:21:51.919645    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:21:51.919746    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:21:51.919897    4640 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:21:51.919903    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:21:51.920070    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:21:51.929531    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:21:51.959146    4640 start.go:296] duration metric: took 85.003807ms for postStartSetup
	I0805 16:21:51.959174    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:51.959830    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:51.959996    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:51.960355    4640 start.go:128] duration metric: took 13.03589336s to createHost
	I0805 16:21:51.960370    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:51.960461    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:51.960532    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.960607    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.960679    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:51.960792    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:51.960921    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:51.960928    4640 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:21:52.018527    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722900112.019707412
	
	I0805 16:21:52.018539    4640 fix.go:216] guest clock: 1722900112.019707412
	I0805 16:21:52.018544    4640 fix.go:229] Guest: 2024-08-05 16:21:52.019707412 -0700 PDT Remote: 2024-08-05 16:21:51.960363 -0700 PDT m=+79.692294773 (delta=59.344412ms)
	I0805 16:21:52.018555    4640 fix.go:200] guest clock delta is within tolerance: 59.344412ms
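The fix.go lines above capture minikube's clock-skew check: it runs `date +%s.%N` in the guest over SSH, parses the epoch timestamp, and accepts the guest/host delta when it falls inside a tolerance. A minimal Go sketch of that comparison using the values from this log; `parseGuestClock` and the one-second tolerance are illustrative assumptions, not minikube's actual code:

	// Minimal sketch (hypothetical, not minikube's fix.go) of the guest-clock
	// check logged above: parse the output of `date +%s.%N` and accept the
	// guest/host skew when it is within a tolerance.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock turns "1722900112.019707412" into a time.Time.
	// `date +%N` always prints nine digits, so the fraction is nanoseconds.
	func parseGuestClock(out string) (time.Time, error) {
		sec, frac, _ := strings.Cut(strings.TrimSpace(out), ".")
		s, err := strconv.ParseInt(sec, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var ns int64
		if frac != "" {
			if ns, err = strconv.ParseInt(frac, 10, 64); err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(s, ns), nil
	}

	func main() {
		guest, err := parseGuestClock("1722900112.019707412") // SSH output above
		if err != nil {
			panic(err)
		}
		host := guest.Add(-59344412 * time.Nanosecond) // host timestamp from the log
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // hypothetical threshold
		if delta <= tolerance {
			fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
		}
	}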
	I0805 16:21:52.018561    4640 start.go:83] releasing machines lock for "multinode-985000-m02", held for 13.094193048s
	I0805 16:21:52.018577    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.018703    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:52.040117    4640 out.go:177] * Found network options:
	I0805 16:21:52.084887    4640 out.go:177]   - NO_PROXY=192.169.0.13
	W0805 16:21:52.106885    4640 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:21:52.106945    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.107811    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.108153    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.108320    4640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:21:52.108371    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	W0805 16:21:52.108412    4640 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:21:52.108519    4640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:21:52.108545    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:52.108628    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:52.108772    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:52.108842    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:52.108951    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:52.109026    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:52.109176    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:52.109197    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:52.109323    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:52.141829    4640 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:21:52.141939    4640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:21:52.141993    4640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:21:52.191903    4640 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:21:52.192466    4640 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:21:52.192507    4640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
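The find/mv step above sidelines competing bridge and podman CNI configs by renaming them with a `.mk_disabled` suffix so the runtime ignores them. A rough Go equivalent of that rename pass (the glob patterns mirror the logged find expression; running it for real requires root):

	// Sketch of the rename step logged above: move bridge/podman CNI configs
	// aside by appending ".mk_disabled".
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pattern)
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, err)
					continue
				}
				fmt.Printf("disabled %s\n", m)
			}
		}
	}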
	I0805 16:21:52.192514    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:21:52.192581    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:21:52.208225    4640 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:21:52.208528    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:21:52.217078    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:21:52.225489    4640 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:21:52.225534    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:21:52.233992    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:21:52.242465    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:21:52.250835    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:21:52.260065    4640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:21:52.268863    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:21:52.277242    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:21:52.285501    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
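The sequence of sed runs above rewrites /etc/containerd/config.toml in place: pinning the pause image, forcing SystemdCgroup = false so containerd matches the chosen "cgroupfs" driver, pointing conf_dir at /etc/cni/net.d, and re-enabling unprivileged ports. The same kind of edit can be expressed with Go's regexp package; a sketch of just the SystemdCgroup rewrite (the sample TOML fragment is illustrative):

	// Sketch of the in-place edit performed by the
	// `sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'` step
	// above, done with Go's regexp package instead of sed.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n" +
			"  SystemdCgroup = true\n"
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	}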
	I0805 16:21:52.293845    4640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:21:52.301185    4640 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:21:52.301319    4640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:21:52.308881    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:52.403323    4640 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:21:52.423722    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:21:52.423794    4640 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:21:52.442557    4640 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:21:52.443108    4640 command_runner.go:130] > [Unit]
	I0805 16:21:52.443119    4640 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:21:52.443124    4640 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:21:52.443128    4640 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:21:52.443132    4640 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:21:52.443136    4640 command_runner.go:130] > StartLimitBurst=3
	I0805 16:21:52.443141    4640 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:21:52.443147    4640 command_runner.go:130] > [Service]
	I0805 16:21:52.443151    4640 command_runner.go:130] > Type=notify
	I0805 16:21:52.443155    4640 command_runner.go:130] > Restart=on-failure
	I0805 16:21:52.443160    4640 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0805 16:21:52.443165    4640 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:21:52.443175    4640 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:21:52.443182    4640 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:21:52.443188    4640 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:21:52.443194    4640 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:21:52.443200    4640 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:21:52.443212    4640 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:21:52.443224    4640 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:21:52.443231    4640 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:21:52.443234    4640 command_runner.go:130] > ExecStart=
	I0805 16:21:52.443246    4640 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:21:52.443250    4640 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:21:52.443256    4640 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:21:52.443262    4640 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:21:52.443265    4640 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:21:52.443269    4640 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:21:52.443272    4640 command_runner.go:130] > LimitCORE=infinity
	I0805 16:21:52.443277    4640 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:21:52.443282    4640 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:21:52.443285    4640 command_runner.go:130] > TasksMax=infinity
	I0805 16:21:52.443290    4640 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:21:52.443296    4640 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:21:52.443299    4640 command_runner.go:130] > Delegate=yes
	I0805 16:21:52.443304    4640 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:21:52.443313    4640 command_runner.go:130] > KillMode=process
	I0805 16:21:52.443317    4640 command_runner.go:130] > [Install]
	I0805 16:21:52.443321    4640 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:21:52.443454    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:21:52.455112    4640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:21:52.472976    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:21:52.485648    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:21:52.496640    4640 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:21:52.520742    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:21:52.532843    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:21:52.547391    4640 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:21:52.547619    4640 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:21:52.550475    4640 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:21:52.550551    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:21:52.558821    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:21:52.572801    4640 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:21:52.669948    4640 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:21:52.772017    4640 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:21:52.772038    4640 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
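docker.go then writes a small /etc/docker/daemon.json (130 bytes here) to pin Docker itself to the cgroupfs driver. The exact payload is not echoed in the log; a sketch of the one setting that the "configuring docker to use \"cgroupfs\"" line implies, using dockerd's documented exec-opts key (the rest of the real file is an assumption and omitted):

	// Sketch: build the cgroup-driver portion of a daemon.json like the one
	// written above. Only exec-opts is grounded in the log; everything else
	// in minikube's real file is not shown here.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"}, // selects cgroupfs
		}
		b, _ := json.MarshalIndent(cfg, "", "  ")
		fmt.Println(string(b)) // would be written to /etc/docker/daemon.json
	}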
	I0805 16:21:52.785587    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:52.887001    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:22:53.782764    4640 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0805 16:22:53.782779    4640 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0805 16:22:53.782788    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.895755367s)
	I0805 16:22:53.782849    4640 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0805 16:22:53.791796    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:22:53.791808    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578059613Z" level=info msg="Starting up"
	I0805 16:22:53.791820    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578746899Z" level=info msg="containerd not running, starting managed containerd"
	I0805 16:22:53.791833    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.579364099Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	I0805 16:22:53.791843    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.597194743Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0805 16:22:53.791853    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613422882Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0805 16:22:53.791865    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613448264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0805 16:22:53.791875    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613527396Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0805 16:22:53.791884    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613540484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791897    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613598776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791906    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613664323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791924    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613844698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791936    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613881896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791948    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613894727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791957    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613902000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791967    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614005875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791976    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614259691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791991    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615867073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.792000    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615974584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.792024    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616138996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.792033    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616172823Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0805 16:22:53.792042    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616291383Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0805 16:22:53.792050    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616398312Z" level=info msg="metadata content store policy set" policy=shared
	I0805 16:22:53.792059    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.618998610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0805 16:22:53.792068    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619065338Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0805 16:22:53.792076    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619081703Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0805 16:22:53.792085    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619092273Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0805 16:22:53.792094    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619101426Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0805 16:22:53.792103    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619164798Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0805 16:22:53.792113    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619370752Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0805 16:22:53.792121    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619460644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0805 16:22:53.792129    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619495461Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0805 16:22:53.792138    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619506581Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0805 16:22:53.792148    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619515758Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792158    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619524383Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792170    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619532546Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792178    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619541391Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792187    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619550990Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792197    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619565508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792266    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619576616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792278    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619584035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792291    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619598072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792299    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619608190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792307    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619616319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792316    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619625389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792326    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619634123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792335    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619648148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792344    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619658942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792353    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619667668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792362    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619676302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792371    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619686416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792380    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619694011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792388    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619701566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792397    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619709342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792406    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619719250Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0805 16:22:53.792415    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619733203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792423    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619741785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792432    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619749153Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0805 16:22:53.792442    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619797467Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0805 16:22:53.792454    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619811479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0805 16:22:53.792467    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619819137Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0805 16:22:53.792661    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619826861Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0805 16:22:53.792673    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619833500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792682    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619841896Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0805 16:22:53.792690    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619852419Z" level=info msg="NRI interface is disabled by configuration."
	I0805 16:22:53.792702    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620071162Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0805 16:22:53.792710    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620124755Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0805 16:22:53.792718    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620155079Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0805 16:22:53.792725    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620168148Z" level=info msg="containerd successfully booted in 0.023750s"
	I0805 16:22:53.792734    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.639692405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0805 16:22:53.792741    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.644102102Z" level=info msg="Loading containers: start."
	I0805 16:22:53.792763    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.740540264Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0805 16:22:53.792774    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.826229634Z" level=info msg="Loading containers: done."
	I0805 16:22:53.792783    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843276878Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0805 16:22:53.792792    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843375843Z" level=info msg="Daemon has completed initialization"
	I0805 16:22:53.792800    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869275976Z" level=info msg="API listen on /var/run/docker.sock"
	I0805 16:22:53.792807    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869434474Z" level=info msg="API listen on [::]:2376"
	I0805 16:22:53.792813    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	I0805 16:22:53.792821    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.919662359Z" level=info msg="Processing signal 'terminated'"
	I0805 16:22:53.792829    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920773928Z" level=info msg="Daemon shutdown complete"
	I0805 16:22:53.792840    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920792538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0805 16:22:53.792852    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920845272Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0805 16:22:53.792861    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920858866Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0805 16:22:53.792868    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0805 16:22:53.792874    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0805 16:22:53.792904    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0805 16:22:53.792911    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:22:53.792918    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 dockerd[923]: time="2024-08-05T23:21:53.957339969Z" level=info msg="Starting up"
	I0805 16:22:53.792929    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 dockerd[923]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0805 16:22:53.792940    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0805 16:22:53.792946    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0805 16:22:53.792952    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
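The decisive line in the journal is dockerd[923] timing out while dialing /run/containerd/containerd.sock: the restarted daemon waits for its managed containerd, which never comes back, and the dial's context deadline expires (23:21:53 to 23:22:53, a full minute, matching the 1m0.89s restart above). The same failure mode in miniature, with a short deadline standing in for dockerd's sixty-second wait:

	// Sketch of the failure mode in the journal above: dial a unix socket
	// under a context deadline and report "context deadline exceeded" when
	// nothing is listening.
	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second) // dockerd waits far longer
		defer cancel()
		var d net.Dialer
		conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
		if err != nil {
			fmt.Println("failed to dial:", err) // e.g. context deadline exceeded
			return
		}
		conn.Close()
	}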
	I0805 16:22:53.817223    4640 out.go:177] 
	W0805 16:22:53.838182    4640 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 05 23:21:50 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578059613Z" level=info msg="Starting up"
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578746899Z" level=info msg="containerd not running, starting managed containerd"
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.579364099Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.597194743Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613422882Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613448264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613527396Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613540484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613598776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613664323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613844698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613881896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613894727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613902000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614005875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614259691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615867073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615974584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616138996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616172823Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616291383Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616398312Z" level=info msg="metadata content store policy set" policy=shared
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.618998610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619065338Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619081703Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619092273Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619101426Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619164798Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619370752Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619460644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619495461Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619506581Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619515758Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619524383Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619532546Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619541391Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619550990Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619565508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619576616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619584035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619598072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619608190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619616319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619625389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619634123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619648148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619658942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619667668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619676302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619686416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619694011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619701566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619709342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619719250Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619733203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619741785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619749153Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619797467Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619811479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619819137Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619826861Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619833500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619841896Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619852419Z" level=info msg="NRI interface is disabled by configuration."
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620071162Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620124755Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620155079Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620168148Z" level=info msg="containerd successfully booted in 0.023750s"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.639692405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.644102102Z" level=info msg="Loading containers: start."
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.740540264Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.826229634Z" level=info msg="Loading containers: done."
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843276878Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843375843Z" level=info msg="Daemon has completed initialization"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869275976Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869434474Z" level=info msg="API listen on [::]:2376"
	Aug 05 23:21:51 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.919662359Z" level=info msg="Processing signal 'terminated'"
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920773928Z" level=info msg="Daemon shutdown complete"
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920792538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920845272Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920858866Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 05 23:21:52 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:21:53 multinode-985000-m02 dockerd[923]: time="2024-08-05T23:21:53.957339969Z" level=info msg="Starting up"
	Aug 05 23:22:53 multinode-985000-m02 dockerd[923]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 05 23:21:50 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578059613Z" level=info msg="Starting up"
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578746899Z" level=info msg="containerd not running, starting managed containerd"
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.579364099Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.597194743Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613422882Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613448264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613527396Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613540484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613598776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613664323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613844698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613881896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613894727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613902000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614005875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614259691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615867073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615974584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616138996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616172823Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616291383Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616398312Z" level=info msg="metadata content store policy set" policy=shared
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.618998610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619065338Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619081703Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619092273Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619101426Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619164798Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619370752Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619460644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619495461Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619506581Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619515758Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619524383Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619532546Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619541391Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619550990Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619565508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619576616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619584035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619598072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619608190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619616319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619625389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619634123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619648148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619658942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619667668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619676302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619686416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619694011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619701566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619709342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619719250Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619733203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619741785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619749153Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619797467Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619811479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619819137Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619826861Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619833500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619841896Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619852419Z" level=info msg="NRI interface is disabled by configuration."
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620071162Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620124755Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620155079Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620168148Z" level=info msg="containerd successfully booted in 0.023750s"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.639692405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.644102102Z" level=info msg="Loading containers: start."
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.740540264Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.826229634Z" level=info msg="Loading containers: done."
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843276878Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843375843Z" level=info msg="Daemon has completed initialization"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869275976Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869434474Z" level=info msg="API listen on [::]:2376"
	Aug 05 23:21:51 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.919662359Z" level=info msg="Processing signal 'terminated'"
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920773928Z" level=info msg="Daemon shutdown complete"
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920792538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920845272Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920858866Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 05 23:21:52 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:21:53 multinode-985000-m02 dockerd[923]: time="2024-08-05T23:21:53.957339969Z" level=info msg="Starting up"
	Aug 05 23:22:53 multinode-985000-m02 dockerd[923]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0805 16:22:53.838301    4640 out.go:239] * 
	W0805 16:22:53.839537    4640 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:22:53.901092    4640 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-985000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:244: <<< TestMultiNode/serial/FreshStart2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-985000 logs -n 25: (2.096890969s)
helpers_test.go:252: TestMultiNode/serial/FreshStart2Nodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------|--------------------------|----------|---------|---------------------|---------------------|
	| Command |                   Args                   |         Profile          |   User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------|--------------------------|----------|---------|---------------------|---------------------|
	| stop    | ha-968000 stop -v=7                      | ha-968000                | jenkins  | v1.33.1 | 05 Aug 24 16:12 PDT | 05 Aug 24 16:12 PDT |
	|         | --alsologtostderr                        |                          |          |         |                     |                     |
	| start   | -p ha-968000 --wait=true                 | ha-968000                | jenkins  | v1.33.1 | 05 Aug 24 16:12 PDT |                     |
	|         | -v=7 --alsologtostderr                   |                          |          |         |                     |                     |
	|         | --driver=hyperkit                        |                          |          |         |                     |                     |
	| node    | add -p ha-968000                         | ha-968000                | jenkins  | v1.33.1 | 05 Aug 24 16:14 PDT |                     |
	|         | --control-plane -v=7                     |                          |          |         |                     |                     |
	|         | --alsologtostderr                        |                          |          |         |                     |                     |
	| delete  | -p ha-968000                             | ha-968000                | jenkins  | v1.33.1 | 05 Aug 24 16:14 PDT | 05 Aug 24 16:14 PDT |
	| start   | -p image-364000                          | image-364000             | jenkins  | v1.33.1 | 05 Aug 24 16:14 PDT | 05 Aug 24 16:14 PDT |
	|         | --driver=hyperkit                        |                          |          |         |                     |                     |
	| image   | build -t aaa:latest                      | image-364000             | jenkins  | v1.33.1 | 05 Aug 24 16:14 PDT | 05 Aug 24 16:14 PDT |
	|         | ./testdata/image-build/test-normal       |                          |          |         |                     |                     |
	|         | -p image-364000                          |                          |          |         |                     |                     |
	| image   | build -t aaa:latest                      | image-364000             | jenkins  | v1.33.1 | 05 Aug 24 16:14 PDT | 05 Aug 24 16:14 PDT |
	|         | --build-opt=build-arg=ENV_A=test_env_str |                          |          |         |                     |                     |
	|         | --build-opt=no-cache                     |                          |          |         |                     |                     |
	|         | ./testdata/image-build/test-arg -p       |                          |          |         |                     |                     |
	|         | image-364000                             |                          |          |         |                     |                     |
	| image   | build -t aaa:latest                      | image-364000             | jenkins  | v1.33.1 | 05 Aug 24 16:14 PDT | 05 Aug 24 16:14 PDT |
	|         | ./testdata/image-build/test-normal       |                          |          |         |                     |                     |
	|         | --build-opt=no-cache -p                  |                          |          |         |                     |                     |
	|         | image-364000                             |                          |          |         |                     |                     |
	| image   | build -t aaa:latest                      | image-364000             | jenkins  | v1.33.1 | 05 Aug 24 16:14 PDT | 05 Aug 24 16:14 PDT |
	|         | -f inner/Dockerfile                      |                          |          |         |                     |                     |
	|         | ./testdata/image-build/test-f            |                          |          |         |                     |                     |
	|         | -p image-364000                          |                          |          |         |                     |                     |
	| delete  | -p image-364000                          | image-364000             | jenkins  | v1.33.1 | 05 Aug 24 16:14 PDT | 05 Aug 24 16:15 PDT |
	| start   | -p json-output-702000                    | json-output-702000       | testUser | v1.33.1 | 05 Aug 24 16:15 PDT | 05 Aug 24 16:16 PDT |
	|         | --output=json --user=testUser            |                          |          |         |                     |                     |
	|         | --memory=2200 --wait=true                |                          |          |         |                     |                     |
	|         | --driver=hyperkit                        |                          |          |         |                     |                     |
	| pause   | -p json-output-702000                    | json-output-702000       | testUser | v1.33.1 | 05 Aug 24 16:16 PDT | 05 Aug 24 16:16 PDT |
	|         | --output=json --user=testUser            |                          |          |         |                     |                     |
	| unpause | -p json-output-702000                    | json-output-702000       | testUser | v1.33.1 | 05 Aug 24 16:16 PDT | 05 Aug 24 16:16 PDT |
	|         | --output=json --user=testUser            |                          |          |         |                     |                     |
	| stop    | -p json-output-702000                    | json-output-702000       | testUser | v1.33.1 | 05 Aug 24 16:16 PDT | 05 Aug 24 16:16 PDT |
	|         | --output=json --user=testUser            |                          |          |         |                     |                     |
	| delete  | -p json-output-702000                    | json-output-702000       | jenkins  | v1.33.1 | 05 Aug 24 16:16 PDT | 05 Aug 24 16:16 PDT |
	| start   | -p json-output-error-623000              | json-output-error-623000 | jenkins  | v1.33.1 | 05 Aug 24 16:16 PDT |                     |
	|         | --memory=2200 --output=json              |                          |          |         |                     |                     |
	|         | --wait=true --driver=fail                |                          |          |         |                     |                     |
	| delete  | -p json-output-error-623000              | json-output-error-623000 | jenkins  | v1.33.1 | 05 Aug 24 16:16 PDT | 05 Aug 24 16:16 PDT |
	| start   | -p first-742000                          | first-742000             | jenkins  | v1.33.1 | 05 Aug 24 16:16 PDT | 05 Aug 24 16:17 PDT |
	|         | --driver=hyperkit                        |                          |          |         |                     |                     |
	| start   | -p second-744000                         | second-744000            | jenkins  | v1.33.1 | 05 Aug 24 16:17 PDT | 05 Aug 24 16:18 PDT |
	|         | --driver=hyperkit                        |                          |          |         |                     |                     |
	| delete  | -p second-744000                         | second-744000            | jenkins  | v1.33.1 | 05 Aug 24 16:18 PDT | 05 Aug 24 16:18 PDT |
	| delete  | -p first-742000                          | first-742000             | jenkins  | v1.33.1 | 05 Aug 24 16:18 PDT | 05 Aug 24 16:18 PDT |
	| start   | -p mount-start-1-684000                  | mount-start-1-684000     | jenkins  | v1.33.1 | 05 Aug 24 16:18 PDT |                     |
	|         | --memory=2048 --mount                    |                          |          |         |                     |                     |
	|         | --mount-gid 0 --mount-msize              |                          |          |         |                     |                     |
	|         | 6543 --mount-port 46464                  |                          |          |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes            |                          |          |         |                     |                     |
	|         | --driver=hyperkit                        |                          |          |         |                     |                     |
	| delete  | -p mount-start-2-703000                  | mount-start-2-703000     | jenkins  | v1.33.1 | 05 Aug 24 16:20 PDT | 05 Aug 24 16:20 PDT |
	| delete  | -p mount-start-1-684000                  | mount-start-1-684000     | jenkins  | v1.33.1 | 05 Aug 24 16:20 PDT | 05 Aug 24 16:20 PDT |
	| start   | -p multinode-985000                      | multinode-985000         | jenkins  | v1.33.1 | 05 Aug 24 16:20 PDT |                     |
	|         | --wait=true --memory=2200                |                          |          |         |                     |                     |
	|         | --nodes=2 -v=8                           |                          |          |         |                     |                     |
	|         | --alsologtostderr                        |                          |          |         |                     |                     |
	|         | --driver=hyperkit                        |                          |          |         |                     |                     |
	|---------|------------------------------------------|--------------------------|----------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 16:20:32
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 16:20:32.303800    4640 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:20:32.303980    4640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:20:32.303986    4640 out.go:304] Setting ErrFile to fd 2...
	I0805 16:20:32.303990    4640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:20:32.304163    4640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:20:32.305609    4640 out.go:298] Setting JSON to false
	I0805 16:20:32.329307    4640 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3003,"bootTime":1722897029,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:20:32.329400    4640 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:20:32.351877    4640 out.go:177] * [multinode-985000] minikube v1.33.1 on Darwin 14.5
	I0805 16:20:32.392940    4640 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:20:32.393020    4640 notify.go:220] Checking for updates...
	I0805 16:20:32.435775    4640 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:20:32.456783    4640 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:20:32.477872    4640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:20:32.499010    4640 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:20:32.519936    4640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:20:32.541363    4640 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:20:32.571784    4640 out.go:177] * Using the hyperkit driver based on user configuration
	I0805 16:20:32.613992    4640 start.go:297] selected driver: hyperkit
	I0805 16:20:32.614020    4640 start.go:901] validating driver "hyperkit" against <nil>
	I0805 16:20:32.614042    4640 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:20:32.618322    4640 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:20:32.618456    4640 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 16:20:32.627075    4640 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 16:20:32.631391    4640 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:20:32.631417    4640 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 16:20:32.631452    4640 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:20:32.631678    4640 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:20:32.631709    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:20:32.631719    4640 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0805 16:20:32.631730    4640 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 16:20:32.631823    4640 start.go:340] cluster config:
	{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:20:32.631925    4640 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:20:32.673756    4640 out.go:177] * Starting "multinode-985000" primary control-plane node in "multinode-985000" cluster
	I0805 16:20:32.695001    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:20:32.695088    4640 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 16:20:32.695107    4640 cache.go:56] Caching tarball of preloaded images
	I0805 16:20:32.695319    4640 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:20:32.695338    4640 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:20:32.695809    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:20:32.695848    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json: {Name:mk470c2e849a0c86ee251e86e74d9f6dfdb47dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:32.696485    4640 start.go:360] acquireMachinesLock for multinode-985000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:20:32.696593    4640 start.go:364] duration metric: took 88.666µs to acquireMachinesLock for "multinode-985000"
	I0805 16:20:32.696646    4640 start.go:93] Provisioning new machine with config: &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:20:32.696745    4640 start.go:125] createHost starting for "" (driver="hyperkit")
	I0805 16:20:32.718059    4640 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:20:32.718351    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:20:32.718416    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:20:32.728195    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52477
	I0805 16:20:32.728547    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:20:32.728938    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:20:32.728948    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:20:32.729147    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:20:32.729251    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:32.729369    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:32.729498    4640 start.go:159] libmachine.API.Create for "multinode-985000" (driver="hyperkit")
	I0805 16:20:32.729521    4640 client.go:168] LocalClient.Create starting
	I0805 16:20:32.729556    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:20:32.729608    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:20:32.729625    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:20:32.729685    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:20:32.729724    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:20:32.729737    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:20:32.729749    4640 main.go:141] libmachine: Running pre-create checks...
	I0805 16:20:32.729760    4640 main.go:141] libmachine: (multinode-985000) Calling .PreCreateCheck
	I0805 16:20:32.729840    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:32.729974    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:32.739224    4640 main.go:141] libmachine: Creating machine...
	I0805 16:20:32.739247    4640 main.go:141] libmachine: (multinode-985000) Calling .Create
	I0805 16:20:32.739475    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:32.739754    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.739457    4648 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:20:32.739852    4640 main.go:141] libmachine: (multinode-985000) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:20:32.920622    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.920524    4648 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa...
	I0805 16:20:32.957084    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.957005    4648 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk...
	I0805 16:20:32.957123    4640 main.go:141] libmachine: (multinode-985000) DBG | Writing magic tar header
	I0805 16:20:32.957134    4640 main.go:141] libmachine: (multinode-985000) DBG | Writing SSH key tar header
	I0805 16:20:32.957531    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.957490    4648 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000 ...
	I0805 16:20:33.331110    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:33.331140    4640 main.go:141] libmachine: (multinode-985000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid
	I0805 16:20:33.331159    4640 main.go:141] libmachine: (multinode-985000) DBG | Using UUID 3ac698fc-f622-443b-898d-9b152fa64288
	I0805 16:20:33.442582    4640 main.go:141] libmachine: (multinode-985000) DBG | Generated MAC e2:6:14:d2:13:ae
	I0805 16:20:33.442603    4640 main.go:141] libmachine: (multinode-985000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:20:33.442636    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:20:33.442669    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:20:33.442719    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3ac698fc-f622-443b-898d-9b152fa64288", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:20:33.442758    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3ac698fc-f622-443b-898d-9b152fa64288 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
	I0805 16:20:33.442774    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:20:33.445733    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Pid is 4651
	I0805 16:20:33.446145    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 0
	I0805 16:20:33.446167    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:33.446227    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:33.447073    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:33.447135    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:33.447152    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:33.447186    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:33.447202    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:33.447214    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:33.447222    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:33.447229    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:33.447247    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:33.447269    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:33.447287    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:33.447304    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:33.447321    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:33.453446    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:20:33.506623    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:20:33.507268    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:20:33.507283    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:20:33.507290    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:20:33.507298    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:20:33.891346    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:20:33.891387    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:20:34.006163    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:20:34.006177    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:20:34.006189    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:20:34.006208    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:20:34.007050    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:20:34.007082    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:20:35.448624    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 1
	I0805 16:20:35.448640    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:35.448724    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:35.449516    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:35.449591    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:35.449607    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:35.449619    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:35.449625    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:35.449648    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:35.449664    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:35.449695    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:35.449711    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:35.449719    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:35.449725    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:35.449731    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:35.449738    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:37.449834    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 2
	I0805 16:20:37.449851    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:37.449867    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:37.450676    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:37.450690    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:37.450697    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:37.450707    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:37.450722    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:37.450733    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:37.450744    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:37.450754    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:37.450771    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:37.450784    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:37.450797    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:37.450809    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:37.450819    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:39.451161    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 3
	I0805 16:20:39.451179    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:39.451277    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:39.452025    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:39.452066    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:39.452089    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:39.452104    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:39.452124    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:39.452141    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:39.452154    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:39.452161    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:39.452167    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:39.452183    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:39.452195    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:39.452202    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:39.452211    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:39.592041    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:20:39.592070    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:20:39.592076    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:20:39.615760    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:20:41.452210    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 4
	I0805 16:20:41.452225    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:41.452325    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:41.453101    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:41.453153    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:41.453162    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:41.453169    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:41.453178    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:41.453187    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:41.453194    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:41.453200    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:41.453219    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:41.453231    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:41.453241    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:41.453250    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:41.453258    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:43.455148    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 5
	I0805 16:20:43.455166    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:43.455244    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:43.456059    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:43.456103    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:20:43.456115    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:20:43.456122    4640 main.go:141] libmachine: (multinode-985000) DBG | Found match: e2:6:14:d2:13:ae
	I0805 16:20:43.456127    4640 main.go:141] libmachine: (multinode-985000) DBG | IP: 192.169.0.13
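
The retry loop above resolves the new VM's IP by polling macOS's DHCP lease database for the generated MAC address until a matching entry appears. A minimal Go sketch of that lookup, assuming the stock /var/db/dhcpd_leases entry layout (name=/ip_address=/hw_address= lines inside braces, with ip_address preceding hw_address); this is not the driver's actual code:

// lease_lookup.go — sketch of the MAC -> IP lookup against /var/db/dhcpd_leases.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findIPByMAC scans the leases file for an entry whose hw_address matches
// mac and returns the associated ip_address. Note: macOS writes MAC octets
// without zero-padding (e.g. e2:6:14:...), so mac must use the same form.
func findIPByMAC(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address is "<type>,<mac>"; compare only the MAC part.
			hw := strings.TrimPrefix(line, "hw_address=")
			if i := strings.IndexByte(hw, ','); i >= 0 {
				hw = hw[i+1:]
			}
			if strings.EqualFold(hw, mac) {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := findIPByMAC("/var/db/dhcpd_leases", "e2:6:14:d2:13:ae")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip) // 192.169.0.13 in the run above
}
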
	I0805 16:20:43.456181    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:43.456781    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:43.456879    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:43.456972    4640 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 16:20:43.456985    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:20:43.457082    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:43.457144    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:43.457907    4640 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 16:20:43.457917    4640 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 16:20:43.457923    4640 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 16:20:43.457927    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:43.458023    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:43.458126    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:43.458255    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:43.458346    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:43.458472    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:43.458676    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:43.458683    4640 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 16:20:44.513424    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
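
The `exit 0` probe is how libmachine decides sshd is actually accepting commands, not merely that port 22 is open. A hedged sketch of the same readiness poll using golang.org/x/crypto/ssh; the host, user, key path, and timings below are illustrative:

// ssh_wait.go — poll "exit 0" over SSH until it succeeds, assuming key-based auth.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local VM
		Timeout:         5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
			sess, err := client.NewSession()
			if err == nil {
				runErr := sess.Run("exit 0") // succeeds only once sshd fully works
				sess.Close()
				client.Close()
				if runErr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(2 * time.Second) // matches the ~2s retry cadence in the log
	}
	return fmt.Errorf("ssh on %s not ready after %s", addr, timeout)
}

func main() {
	if err := waitForSSH("192.169.0.13:22", "docker",
		os.ExpandEnv("$HOME/.minikube/machines/multinode-985000/id_rsa"),
		2*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ssh ready")
}
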
	I0805 16:20:44.513443    4640 main.go:141] libmachine: Detecting the provisioner...
	I0805 16:20:44.513452    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.513594    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.513694    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.513791    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.513876    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.513996    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.514158    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.514165    4640 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 16:20:44.573082    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 16:20:44.573142    4640 main.go:141] libmachine: found compatible host: buildroot
	I0805 16:20:44.573149    4640 main.go:141] libmachine: Provisioning with buildroot...
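
Provisioner selection keys off the ID field of the guest's /etc/os-release (here, buildroot). A small sketch of that parse; quoting rules are simplified:

// osrelease.go — read key=value pairs from /etc/os-release.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	kv := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		kv[k] = strings.Trim(v, `"`) // PRETTY_NAME and friends may be quoted
	}
	return kv, sc.Err()
}

func main() {
	kv, err := parseOSRelease("/etc/os-release")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// ID=buildroot selects the buildroot provisioning path in the log above.
	fmt.Printf("ID=%s VERSION_ID=%s\n", kv["ID"], kv["VERSION_ID"])
}
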
	I0805 16:20:44.573155    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.573299    4640 buildroot.go:166] provisioning hostname "multinode-985000"
	I0805 16:20:44.573311    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.573416    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.573499    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.573585    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.573680    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.573795    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.573922    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.574068    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.574076    4640 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000 && echo "multinode-985000" | sudo tee /etc/hostname
	I0805 16:20:44.637872    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000
	
	I0805 16:20:44.637892    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.638029    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.638132    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.638218    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.638297    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.638429    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.638562    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.638582    4640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:20:44.698340    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
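
The shell snippet above makes the /etc/hosts update idempotent: skip if the hostname already resolves, rewrite an existing 127.0.1.1 alias if present, otherwise append one. The same logic in Go; the prefix match approximates the log's grep patterns:

// hosts_entry.go — idempotently ensure the hostname maps to 127.0.1.1.
package main

import (
	"fmt"
	"os"
	"strings"
)

// hasHostname reports whether any hosts line already maps the hostname.
func hasHostname(lines []string, hostname string) bool {
	for _, l := range lines {
		fields := strings.Fields(l)
		for i := 1; i < len(fields); i++ { // field 0 is the IP column
			if fields[i] == hostname {
				return true
			}
		}
	}
	return false
}

func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	if hasHostname(lines, hostname) {
		return nil // nothing to do, same as the grep short-circuit
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") { // rewrite the existing loopback alias
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname) // else append, as the tee -a branch does
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "multinode-985000"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
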
	I0805 16:20:44.698360    4640 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:20:44.698377    4640 buildroot.go:174] setting up certificates
	I0805 16:20:44.698389    4640 provision.go:84] configureAuth start
	I0805 16:20:44.698397    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.698544    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:44.698658    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.698750    4640 provision.go:143] copyHostCerts
	I0805 16:20:44.698781    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:20:44.698850    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:20:44.698858    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:20:44.699001    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:20:44.699205    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:20:44.699246    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:20:44.699250    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:20:44.699341    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:20:44.699482    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:20:44.699528    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:20:44.699533    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:20:44.699615    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:20:44.699756    4640 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-985000]
	I0805 16:20:45.028860    4640 provision.go:177] copyRemoteCerts
	I0805 16:20:45.028920    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:20:45.028938    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.029080    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.029180    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.029338    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.029452    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:45.063652    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:20:45.063724    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:20:45.083743    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:20:45.083800    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0805 16:20:45.103791    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:20:45.103863    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:20:45.123716    4640 provision.go:87] duration metric: took 425.312704ms to configureAuth
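
configureAuth signs a Docker server certificate against the local CA, embedding every address the daemon will answer as (the san=[...] list above) so TLS verification succeeds over both IPs and hostnames. A compact sketch with crypto/x509; the PKCS#1 CA key format, file paths, and one-year lifetime are assumptions, not minikube's actual values:

// servercert.go — sign a server cert with IP and DNS SANs against an existing CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func loadPEM(path string) *pem.Block {
	data, err := os.ReadFile(path)
	must(err)
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in " + path)
	}
	return block
}

func main() {
	caCert, err := x509.ParseCertificate(loadPEM("ca.pem").Bytes)
	must(err)
	caKey, err := x509.ParsePKCS1PrivateKey(loadPEM("ca-key.pem").Bytes)
	must(err)

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-985000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: every IP and name the TLS endpoint answers as.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.13")},
		DNSNames:    []string{"localhost", "minikube", "multinode-985000"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	must(err)
	must(os.WriteFile("server.pem",
		pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644))
	must(os.WriteFile("server-key.pem",
		pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600))
}
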
	I0805 16:20:45.123731    4640 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:20:45.123881    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:20:45.123894    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:45.124028    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.124115    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.124206    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.124285    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.124381    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.124503    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.124632    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.124639    4640 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:20:45.176256    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:20:45.176269    4640 buildroot.go:70] root file system type: tmpfs
	I0805 16:20:45.176337    4640 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:20:45.176350    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.176482    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.176580    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.176695    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.176782    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.176911    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.177045    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.177090    4640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:20:45.240992    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:20:45.241023    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.241166    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.241270    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.241382    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.241469    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.241590    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.241743    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.241755    4640 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:20:46.765402    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:20:46.765418    4640 main.go:141] libmachine: Checking connection to Docker...
	I0805 16:20:46.765424    4640 main.go:141] libmachine: (multinode-985000) Calling .GetURL
	I0805 16:20:46.765563    4640 main.go:141] libmachine: Docker is up and running!
	I0805 16:20:46.765570    4640 main.go:141] libmachine: Reticulating splines...
	I0805 16:20:46.765575    4640 client.go:171] duration metric: took 14.036043683s to LocalClient.Create
	I0805 16:20:46.765592    4640 start.go:167] duration metric: took 14.036090848s to libmachine.API.Create "multinode-985000"
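
The diff-or-install one-liner a few steps up is the usual install-if-changed pattern: stage the rendered unit as docker.service.new, and only when it differs from what is on disk move it into place and daemon-reload/enable/restart. A Go sketch of the same pattern:

// unit_install.go — install a staged unit file only if it differs, then restart.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func installIfChanged(current, next, service string) error {
	old, _ := os.ReadFile(current) // a missing file reads as empty, forcing an install
	staged, err := os.ReadFile(next)
	if err != nil {
		return err
	}
	if bytes.Equal(old, staged) {
		return os.Remove(next) // no change: discard the staged copy
	}
	if err := os.Rename(next, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"-f", "enable", service},
		{"-f", "restart", service},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := installIfChanged(
		"/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new",
		"docker"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
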
	I0805 16:20:46.765602    4640 start.go:293] postStartSetup for "multinode-985000" (driver="hyperkit")
	I0805 16:20:46.765609    4640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:20:46.765620    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.765765    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:20:46.765778    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.765878    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.765972    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.766070    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.766168    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.808597    4640 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:20:46.814840    4640 command_runner.go:130] > NAME=Buildroot
	I0805 16:20:46.814852    4640 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:20:46.814856    4640 command_runner.go:130] > ID=buildroot
	I0805 16:20:46.814869    4640 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:20:46.814873    4640 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:20:46.814969    4640 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:20:46.814985    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:20:46.815099    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:20:46.815290    4640 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:20:46.815297    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:20:46.815526    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:20:46.832473    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:20:46.852626    4640 start.go:296] duration metric: took 87.015317ms for postStartSetup
	I0805 16:20:46.852653    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:46.853264    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:46.853417    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:20:46.853762    4640 start.go:128] duration metric: took 14.156998155s to createHost
	I0805 16:20:46.853776    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.853870    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.853964    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.854078    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.854160    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.854284    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:46.854405    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:46.854413    4640 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:20:46.906137    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722900047.071906799
	
	I0805 16:20:46.906149    4640 fix.go:216] guest clock: 1722900047.071906799
	I0805 16:20:46.906154    4640 fix.go:229] Guest: 2024-08-05 16:20:47.071906799 -0700 PDT Remote: 2024-08-05 16:20:46.85377 -0700 PDT m=+14.585721958 (delta=218.136799ms)
	I0805 16:20:46.906178    4640 fix.go:200] guest clock delta is within tolerance: 218.136799ms
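
The guest-clock check reads `date +%s.%N` over SSH, diffs it against the host clock, and only proceeds when the skew is inside a tolerance (218.136799ms here). A sketch of the parse-and-compare; the 1s tolerance below is illustrative, not minikube's actual threshold:

// clockdelta.go — parse the guest's epoch time and check skew against the host.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

const tolerance = 1 * time.Second // illustrative threshold

// parseEpoch turns `date +%s.%N` output like "1722900047.071906799" into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	sec, frac, _ := strings.Cut(strings.TrimSpace(s), ".")
	secs, err := strconv.ParseInt(sec, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nanos int64
	if frac != "" {
		frac = (frac + "000000000")[:9] // pad to 9 digits so "07" means 70ms, not 7ns
		if nanos, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(secs, nanos), nil
}

func main() {
	guest, err := parseEpoch("1722900047.071906799") // guest value from the log
	if err != nil {
		panic(err)
	}
	pdt := time.FixedZone("PDT", -7*60*60)
	host := time.Date(2024, 8, 5, 16, 20, 46, 853770000, pdt) // host sample from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta > tolerance {
		fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
		return
	}
	fmt.Printf("guest clock delta is within tolerance: %v\n", delta) // ~218.136799ms
}
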
	I0805 16:20:46.906182    4640 start.go:83] releasing machines lock for "multinode-985000", held for 14.209573761s
	I0805 16:20:46.906200    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906321    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:46.906429    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906734    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906832    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906917    4640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:20:46.906947    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.906977    4640 ssh_runner.go:195] Run: cat /version.json
	I0805 16:20:46.906987    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.907036    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.907080    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.907105    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.907167    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.907190    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.907251    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.907285    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.907353    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.936969    4640 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0805 16:20:46.937263    4640 ssh_runner.go:195] Run: systemctl --version
	I0805 16:20:46.992747    4640 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:20:46.993626    4640 command_runner.go:130] > systemd 252 (252)
	I0805 16:20:46.993660    4640 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0805 16:20:46.993799    4640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:20:46.998949    4640 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:20:46.998969    4640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:20:46.999002    4640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:20:47.012276    4640 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:20:47.012544    4640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:20:47.012556    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:20:47.012657    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:20:47.027593    4640 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:20:47.027660    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:20:47.035836    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:20:47.044911    4640 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:20:47.044968    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:20:47.053571    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:20:47.061858    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:20:47.070031    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:20:47.078524    4640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:20:47.087870    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:20:47.096303    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:20:47.104482    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:20:47.112756    4640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:20:47.120033    4640 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:20:47.120127    4640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:20:47.128644    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:47.220387    4640 ssh_runner.go:195] Run: sudo systemctl restart containerd
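
The run of sed commands above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs driver, the runc v2 shim, pause:3.9, and /etc/cni/net.d before the restart. The same rewrites expressed in Go, covering a representative subset of the substitutions:

// containerd_cfg.go — apply the log's sed-style rewrites to config.toml.
package main

import (
	"os"
	"regexp"
)

// one (pattern, replacement) pair per sed command in the log
var rewrites = []struct{ re, repl string }{
	{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
	{`"io.containerd.runtime.v1.linux"`, `"io.containerd.runc.v2"`},
	{`"io.containerd.runc.v1"`, `"io.containerd.runc.v2"`},
	{`(?m)^(\s*)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
	{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
}

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	for _, r := range rewrites {
		data = regexp.MustCompile(r.re).ReplaceAll(data, []byte(r.repl))
	}
	if err := os.WriteFile(path, data, 0644); err != nil {
		panic(err)
	}
	// a `systemctl restart containerd`, as in the log, would follow here
}
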
	I0805 16:20:47.239567    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:20:47.239642    4640 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:20:47.254939    4640 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:20:47.255001    4640 command_runner.go:130] > [Unit]
	I0805 16:20:47.255011    4640 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:20:47.255015    4640 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:20:47.255020    4640 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:20:47.255026    4640 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:20:47.255030    4640 command_runner.go:130] > StartLimitBurst=3
	I0805 16:20:47.255034    4640 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:20:47.255037    4640 command_runner.go:130] > [Service]
	I0805 16:20:47.255041    4640 command_runner.go:130] > Type=notify
	I0805 16:20:47.255055    4640 command_runner.go:130] > Restart=on-failure
	I0805 16:20:47.255063    4640 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:20:47.255073    4640 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:20:47.255080    4640 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:20:47.255088    4640 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:20:47.255094    4640 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:20:47.255099    4640 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:20:47.255112    4640 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:20:47.255120    4640 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:20:47.255128    4640 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:20:47.255134    4640 command_runner.go:130] > ExecStart=
	I0805 16:20:47.255164    4640 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:20:47.255172    4640 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:20:47.255182    4640 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:20:47.255189    4640 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:20:47.255193    4640 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:20:47.255196    4640 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:20:47.255200    4640 command_runner.go:130] > LimitCORE=infinity
	I0805 16:20:47.255205    4640 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:20:47.255209    4640 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:20:47.255212    4640 command_runner.go:130] > TasksMax=infinity
	I0805 16:20:47.255215    4640 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:20:47.255220    4640 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:20:47.255225    4640 command_runner.go:130] > Delegate=yes
	I0805 16:20:47.255230    4640 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:20:47.255233    4640 command_runner.go:130] > KillMode=process
	I0805 16:20:47.255236    4640 command_runner.go:130] > [Install]
	I0805 16:20:47.255259    4640 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:20:47.255324    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:20:47.269909    4640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:20:47.286027    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:20:47.296365    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:20:47.306405    4640 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:20:47.369760    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:20:47.379998    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:20:47.394696    4640 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:20:47.394951    4640 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:20:47.397850    4640 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:20:47.398038    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:20:47.406063    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:20:47.419537    4640 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:20:47.514227    4640 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:20:47.637079    4640 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:20:47.637156    4640 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:20:47.651314    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:47.748259    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:20:50.076345    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.32806615s)
	I0805 16:20:50.076407    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:20:50.086580    4640 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 16:20:50.099944    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:20:50.110410    4640 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:20:50.206329    4640 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:20:50.317239    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:50.417670    4640 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:20:50.431617    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:20:50.443305    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:50.555307    4640 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:20:50.610408    4640 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:20:50.610481    4640 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:20:50.614751    4640 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0805 16:20:50.614762    4640 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0805 16:20:50.614767    4640 command_runner.go:130] > Device: 0,22	Inode: 806         Links: 1
	I0805 16:20:50.614772    4640 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0805 16:20:50.614775    4640 command_runner.go:130] > Access: 2024-08-05 23:20:50.735793184 +0000
	I0805 16:20:50.614784    4640 command_runner.go:130] > Modify: 2024-08-05 23:20:50.735793184 +0000
	I0805 16:20:50.614789    4640 command_runner.go:130] > Change: 2024-08-05 23:20:50.736793062 +0000
	I0805 16:20:50.614792    4640 command_runner.go:130] >  Birth: -
	I0805 16:20:50.614829    4640 start.go:563] Will wait 60s for crictl version
	I0805 16:20:50.614890    4640 ssh_runner.go:195] Run: which crictl
	I0805 16:20:50.617807    4640 command_runner.go:130] > /usr/bin/crictl
	I0805 16:20:50.617933    4640 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:20:50.644026    4640 command_runner.go:130] > Version:  0.1.0
	I0805 16:20:50.644070    4640 command_runner.go:130] > RuntimeName:  docker
	I0805 16:20:50.644117    4640 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0805 16:20:50.644195    4640 command_runner.go:130] > RuntimeApiVersion:  v1
	I0805 16:20:50.645396    4640 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0805 16:20:50.645460    4640 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:20:50.661131    4640 command_runner.go:130] > 27.1.1
	I0805 16:20:50.662194    4640 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:20:50.677860    4640 command_runner.go:130] > 27.1.1
	I0805 16:20:50.700872    4640 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 16:20:50.700922    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:50.701316    4640 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0805 16:20:50.706154    4640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:20:50.715610    4640 kubeadm.go:883] updating cluster {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 16:20:50.715677    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:20:50.715736    4640 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:20:50.733572    4640 docker.go:685] Got preloaded images: 
	I0805 16:20:50.733584    4640 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0805 16:20:50.733634    4640 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:20:50.741005    4640 command_runner.go:139] > {"Repositories":{}}
	I0805 16:20:50.741090    4640 ssh_runner.go:195] Run: which lz4
	I0805 16:20:50.744527    4640 command_runner.go:130] > /usr/bin/lz4
	I0805 16:20:50.744558    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0805 16:20:50.744692    4640 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 16:20:50.747718    4640 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 16:20:50.747836    4640 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 16:20:50.747851    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0805 16:20:51.865752    4640 docker.go:649] duration metric: took 1.121114736s to copy over tarball
	I0805 16:20:51.865833    4640 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 16:20:54.241811    4640 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.375959074s)
	I0805 16:20:54.241825    4640 ssh_runner.go:146] rm: /preloaded.tar.lz4
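
The preload path avoids pulling images over the network: check for the tarball on the guest, copy it over when absent, unpack into /var with xattrs preserved, then delete it to reclaim space. A sketch that shells out to the same tar invocation shown above:

// preload_extract.go — unpack the preload tarball the way the log does.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Fprintf(os.Stderr, "tarball missing (would scp it over first): %v\n", err)
		os.Exit(1)
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", // decompress through lz4, matching the log
		"-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
	_ = os.Remove(tarball) // the log removes the tarball afterwards
}
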
	I0805 16:20:54.267125    4640 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:20:54.275283    4640 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d2
89d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0805 16:20:54.275373    4640 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0805 16:20:54.288931    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:54.386395    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:20:56.795159    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.408741228s)
	I0805 16:20:56.795248    4640 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:20:56.808093    4640 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0805 16:20:56.808107    4640 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0805 16:20:56.808111    4640 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0805 16:20:56.808116    4640 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0805 16:20:56.808120    4640 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0805 16:20:56.808123    4640 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0805 16:20:56.808128    4640 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0805 16:20:56.808135    4640 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:20:56.809018    4640 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 16:20:56.809035    4640 cache_images.go:84] Images are preloaded, skipping loading
	I0805 16:20:56.809048    4640 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0805 16:20:56.809127    4640 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-985000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
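
The kubelet drop-in above is rendered from a template with per-node values (binary path for the pinned Kubernetes version, --hostname-override, --node-ip). A simplified sketch of that rendering step using text/template; this is not minikube's actual template, which carries more flags:

    package main

    import (
        "os"
        "text/template"
    )

    // kubeletUnit is a trimmed stand-in for the systemd drop-in shown above.
    const kubeletUnit = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(kubeletUnit))
        _ = t.Execute(os.Stdout, map[string]string{
            "KubernetesVersion": "v1.30.3",
            "NodeName":          "multinode-985000",
            "NodeIP":            "192.169.0.13",
        })
    }
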
	I0805 16:20:56.809195    4640 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 16:20:56.847007    4640 command_runner.go:130] > cgroupfs
	I0805 16:20:56.847610    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:20:56.847620    4640 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 16:20:56.847630    4640 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 16:20:56.847650    4640 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-985000 NodeName:multinode-985000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 16:20:56.847744    4640 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-985000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
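
The file written to /var/tmp/minikube/kubeadm.yaml.new above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by ---). A sketch that sanity-checks such a file by decoding each document generically, assuming gopkg.in/yaml.v3 and a hypothetical local copy:

    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // hypothetical local copy
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break // end of the multi-document stream
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("kind=%v apiVersion=%v\n", doc["kind"], doc["apiVersion"])
        }
    }
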
	I0805 16:20:56.847807    4640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 16:20:56.855919    4640 command_runner.go:130] > kubeadm
	I0805 16:20:56.855931    4640 command_runner.go:130] > kubectl
	I0805 16:20:56.855934    4640 command_runner.go:130] > kubelet
	I0805 16:20:56.855959    4640 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:20:56.856010    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 16:20:56.863284    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0805 16:20:56.876753    4640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:20:56.890292    4640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0805 16:20:56.904628    4640 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0805 16:20:56.907711    4640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
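
The bash one-liner above is an idempotent upsert: it drops any existing line ending in <tab>control-plane.minikube.internal, appends a fresh "IP<tab>host" entry, and copies the result back over /etc/hosts. The same logic in Go, as a pure function over the file contents rather than a rewrite of /etc/hosts:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHost drops any existing entry for name and appends ip<TAB>name,
    // mirroring the grep -v / echo / cp sequence in the log.
    func upsertHost(hosts, ip, name string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // old entry for this host
            }
            kept = append(kept, line)
        }
        return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
    }

    func main() {
        data, _ := os.ReadFile("/etc/hosts")
        fmt.Print(upsertHost(strings.TrimRight(string(data), "\n"),
            "192.169.0.13", "control-plane.minikube.internal"))
    }
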
	I0805 16:20:56.917108    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:57.013172    4640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:20:57.028650    4640 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000 for IP: 192.169.0.13
	I0805 16:20:57.028663    4640 certs.go:194] generating shared ca certs ...
	I0805 16:20:57.028674    4640 certs.go:226] acquiring lock for ca certs: {Name:mkb83e058d89c7d4e66f4136f377a3c305b13735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.028863    4640 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key
	I0805 16:20:57.028935    4640 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key
	I0805 16:20:57.028946    4640 certs.go:256] generating profile certs ...
	I0805 16:20:57.028995    4640 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key
	I0805 16:20:57.029007    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt with IP's: []
	I0805 16:20:57.088127    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt ...
	I0805 16:20:57.088142    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt: {Name:mkb7087fa165ae496621b10df42dfd2f8603360a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.088531    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key ...
	I0805 16:20:57.088540    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key: {Name:mk37e627de9c39a2300d317d721ebf92a202a17e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.088775    4640 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec
	I0805 16:20:57.088790    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.13]
	I0805 16:20:57.189318    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec ...
	I0805 16:20:57.189336    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec: {Name:mkb4501af4f6db766eb719de2f42fc564a23d2d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.189653    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec ...
	I0805 16:20:57.189669    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec: {Name:mke641ddecfc5629bb592a5b6321d446ed3b31bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.189903    4640 certs.go:381] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt
	I0805 16:20:57.190140    4640 certs.go:385] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key
	I0805 16:20:57.190318    4640 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key
	I0805 16:20:57.190336    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt with IP's: []
	I0805 16:20:57.386717    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt ...
	I0805 16:20:57.386733    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt: {Name:mk486344c8c5b8383e5349f68a995b553e8d31c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.387043    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key ...
	I0805 16:20:57.387052    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key: {Name:mk2b24e1a5e962e12395adf21e4f6ad64901ee0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
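
Each profile cert above is generated against an explicit IP SAN list: the first service IP (10.96.0.1), loopback, 10.0.0.1, and the node IP. A hedged sketch of generating a self-signed certificate with IP SANs using crypto/x509; this illustrates the mechanism, not minikube's actual crypto.go code:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The IP SANs from the log line above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.169.0.13"),
            },
        }
        // Self-signed: template doubles as parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
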
	I0805 16:20:57.387278    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 16:20:57.387306    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 16:20:57.387325    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 16:20:57.387349    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 16:20:57.387368    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 16:20:57.387391    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 16:20:57.387411    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 16:20:57.387432    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 16:20:57.387531    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem (1338 bytes)
	W0805 16:20:57.387583    4640 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0805 16:20:57.387591    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:20:57.387621    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem (1082 bytes)
	I0805 16:20:57.387656    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:20:57.387684    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem (1675 bytes)
	I0805 16:20:57.387747    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:20:57.387781    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem -> /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.387803    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.387822    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.388188    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:20:57.408800    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 16:20:57.429927    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:20:57.449924    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:20:57.470736    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 16:20:57.490564    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 16:20:57.511342    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:20:57.531190    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 16:20:57.551984    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0805 16:20:57.571601    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0805 16:20:57.592369    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:20:57.611866    4640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 16:20:57.626527    4640 ssh_runner.go:195] Run: openssl version
	I0805 16:20:57.630504    4640 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0805 16:20:57.630711    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0805 16:20:57.638913    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642115    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642280    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642315    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.646345    4640 command_runner.go:130] > 51391683
	I0805 16:20:57.646544    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0805 16:20:57.654953    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0805 16:20:57.663842    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667242    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667258    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667300    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.671438    4640 command_runner.go:130] > 3ec20f2e
	I0805 16:20:57.671648    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 16:20:57.679692    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:20:57.688061    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691411    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691493    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691531    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.695572    4640 command_runner.go:130] > b5213941
	I0805 16:20:57.695754    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
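
The hash-then-symlink sequence above installs each CA into OpenSSL's hashed lookup directory: `openssl x509 -hash` prints the subject hash (e.g. b5213941), and /etc/ssl/certs/<hash>.0 must point at the PEM for OpenSSL to find it during verification. A sketch that shells out the same way the logged commands do (paths as in the log; run as root on a real node):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"
        // Same invocation as in the log: print the subject hash only.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            if err := os.Symlink(cert, link); err != nil {
                panic(err)
            }
        }
    }
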
	I0805 16:20:57.704703    4640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:20:57.707752    4640 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 16:20:57.707872    4640 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 16:20:57.707921    4640 kubeadm.go:392] StartCluster: {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:20:57.708054    4640 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:20:57.720408    4640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 16:20:57.731114    4640 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0805 16:20:57.731128    4640 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0805 16:20:57.731133    4640 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0805 16:20:57.731194    4640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 16:20:57.739645    4640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 16:20:57.751095    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0805 16:20:57.751108    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0805 16:20:57.751113    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0805 16:20:57.751120    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:20:57.751266    4640 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:20:57.751273    4640 kubeadm.go:157] found existing configuration files:
	
	I0805 16:20:57.751324    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 16:20:57.759086    4640 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:20:57.759185    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:20:57.759233    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 16:20:57.769060    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 16:20:57.778103    4640 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:20:57.778143    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:20:57.778190    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 16:20:57.786612    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 16:20:57.794733    4640 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:20:57.794754    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:20:57.794796    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 16:20:57.802671    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 16:20:57.810242    4640 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:20:57.810264    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:20:57.810299    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 16:20:57.818339    4640 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
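
The init command above pins PATH to the versioned binaries directory and passes --ignore-preflight-errors so kubeadm proceeds inside a reused VM where those directories, manifests, ports, and resource checks would otherwise fail preflight. A trivial sketch of assembling that flag from its parts (the list here is abbreviated from the log):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        ignores := []string{
            "DirAvailable--etc-kubernetes-manifests",
            "DirAvailable--var-lib-minikube",
            "Port-10250", "Swap", "NumCPU", "Mem", // abbreviated
        }
        cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" ` +
            `kubeadm init --config /var/tmp/minikube/kubeadm.yaml ` +
            `--ignore-preflight-errors=` + strings.Join(ignores, ",")
        fmt.Println(cmd)
    }
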
	I0805 16:20:57.890449    4640 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 16:20:57.890461    4640 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0805 16:20:57.890501    4640 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 16:20:57.890507    4640 command_runner.go:130] > [preflight] Running pre-flight checks
	I0805 16:20:57.984851    4640 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 16:20:57.984855    4640 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 16:20:57.984956    4640 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 16:20:57.984962    4640 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 16:20:57.985041    4640 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0805 16:20:57.985038    4640 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0805 16:20:58.152965    4640 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:20:58.152995    4640 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:20:58.175785    4640 out.go:204]   - Generating certificates and keys ...
	I0805 16:20:58.175840    4640 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0805 16:20:58.175851    4640 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 16:20:58.175914    4640 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0805 16:20:58.175920    4640 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 16:20:58.229002    4640 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 16:20:58.229016    4640 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 16:20:58.322701    4640 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 16:20:58.322717    4640 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0805 16:20:58.394063    4640 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 16:20:58.394077    4640 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0805 16:20:58.601975    4640 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 16:20:58.601995    4640 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0805 16:20:58.821056    4640 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 16:20:58.821065    4640 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0805 16:20:58.821204    4640 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:58.821214    4640 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.150811    4640 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 16:20:59.150817    4640 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0805 16:20:59.151036    4640 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.151046    4640 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.206073    4640 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 16:20:59.206088    4640 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 16:20:59.294956    4640 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 16:20:59.294966    4640 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 16:20:59.348591    4640 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 16:20:59.348602    4640 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0805 16:20:59.348788    4640 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:20:59.348797    4640 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:20:59.511379    4640 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:20:59.511395    4640 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:20:59.789652    4640 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 16:20:59.789666    4640 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 16:20:59.965508    4640 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:20:59.965517    4640 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:21:00.208268    4640 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:21:00.208284    4640 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:21:00.402575    4640 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:21:00.402582    4640 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:21:00.409122    4640 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 16:21:00.409137    4640 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 16:21:00.410639    4640 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:21:00.410652    4640 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:21:00.430944    4640 out.go:204]   - Booting up control plane ...
	I0805 16:21:00.431017    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:21:00.431032    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:21:00.431106    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:21:00.431106    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:21:00.431174    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:21:00.431182    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:21:00.431274    4640 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:21:00.431286    4640 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:21:00.431361    4640 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:21:00.431369    4640 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:21:00.431399    4640 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 16:21:00.431405    4640 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0805 16:21:00.540991    4640 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 16:21:00.541004    4640 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 16:21:00.541076    4640 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 16:21:00.541081    4640 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 16:21:01.042556    4640 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.719164ms
	I0805 16:21:01.042573    4640 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.719164ms
	I0805 16:21:01.042632    4640 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 16:21:01.042639    4640 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 16:21:05.541995    4640 kubeadm.go:310] [api-check] The API server is healthy after 4.502407968s
	I0805 16:21:05.542014    4640 command_runner.go:130] > [api-check] The API server is healthy after 4.502407968s
	I0805 16:21:05.551474    4640 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 16:21:05.551486    4640 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 16:21:05.558278    4640 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 16:21:05.558284    4640 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 16:21:05.572116    4640 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 16:21:05.572130    4640 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0805 16:21:05.572281    4640 kubeadm.go:310] [mark-control-plane] Marking the node multinode-985000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 16:21:05.572292    4640 command_runner.go:130] > [mark-control-plane] Marking the node multinode-985000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 16:21:05.579214    4640 kubeadm.go:310] [bootstrap-token] Using token: 0mwls8.ribzsy6ooov2flu0
	I0805 16:21:05.579225    4640 command_runner.go:130] > [bootstrap-token] Using token: 0mwls8.ribzsy6ooov2flu0
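
The bootstrap token above follows kubeadm's documented format: a 6-character public ID and a 16-character secret, both lowercase alphanumeric, joined by a dot. A one-liner check in Go:

    package main

    import (
        "fmt"
        "regexp"
    )

    // kubeadm bootstrap tokens: <6-char id>.<16-char secret>, [a-z0-9] only.
    var tokenRe = regexp.MustCompile(`^[a-z0-9]{6}\.[a-z0-9]{16}$`)

    func main() {
        fmt.Println(tokenRe.MatchString("0mwls8.ribzsy6ooov2flu0")) // true
    }
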
	I0805 16:21:05.613851    4640 out.go:204]   - Configuring RBAC rules ...
	I0805 16:21:05.613974    4640 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 16:21:05.613988    4640 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 16:21:05.655317    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 16:21:05.655329    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 16:21:05.659733    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 16:21:05.659737    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 16:21:05.661608    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 16:21:05.661619    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 16:21:05.663605    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 16:21:05.663612    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 16:21:05.665771    4640 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 16:21:05.665778    4640 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 16:21:05.947572    4640 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 16:21:05.947585    4640 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 16:21:06.357765    4640 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 16:21:06.357776    4640 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0805 16:21:06.946930    4640 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 16:21:06.946942    4640 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0805 16:21:06.947937    4640 kubeadm.go:310] 
	I0805 16:21:06.947989    4640 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 16:21:06.947996    4640 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0805 16:21:06.948000    4640 kubeadm.go:310] 
	I0805 16:21:06.948071    4640 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 16:21:06.948080    4640 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0805 16:21:06.948088    4640 kubeadm.go:310] 
	I0805 16:21:06.948121    4640 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 16:21:06.948125    4640 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0805 16:21:06.948179    4640 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 16:21:06.948187    4640 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 16:21:06.948229    4640 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 16:21:06.948234    4640 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 16:21:06.948237    4640 kubeadm.go:310] 
	I0805 16:21:06.948284    4640 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 16:21:06.948302    4640 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0805 16:21:06.948309    4640 kubeadm.go:310] 
	I0805 16:21:06.948354    4640 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 16:21:06.948367    4640 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 16:21:06.948375    4640 kubeadm.go:310] 
	I0805 16:21:06.948414    4640 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 16:21:06.948418    4640 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0805 16:21:06.948479    4640 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 16:21:06.948488    4640 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 16:21:06.948558    4640 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 16:21:06.948564    4640 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 16:21:06.948570    4640 kubeadm.go:310] 
	I0805 16:21:06.948633    4640 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 16:21:06.948638    4640 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0805 16:21:06.948701    4640 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 16:21:06.948708    4640 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0805 16:21:06.948715    4640 kubeadm.go:310] 
	I0805 16:21:06.948788    4640 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.948795    4640 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.948879    4640 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf \
	I0805 16:21:06.948886    4640 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf \
	I0805 16:21:06.948905    4640 kubeadm.go:310] 	--control-plane 
	I0805 16:21:06.948911    4640 command_runner.go:130] > 	--control-plane 
	I0805 16:21:06.948916    4640 kubeadm.go:310] 
	I0805 16:21:06.948980    4640 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 16:21:06.948984    4640 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0805 16:21:06.948987    4640 kubeadm.go:310] 
	I0805 16:21:06.949052    4640 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.949057    4640 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.949136    4640 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf 
	I0805 16:21:06.949141    4640 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf 
	I0805 16:21:06.949613    4640 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 16:21:06.949621    4640 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
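
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's Subject Public Key Info; joining nodes use it to pin the CA they fetch over the insecure bootstrap channel. A sketch that recomputes it from the CA certificate (assuming a hypothetical local copy of ca.crt):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("ca.crt") // hypothetical local copy of the cluster CA
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Hash the DER-encoded Subject Public Key Info, as kubeadm does.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
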
	I0805 16:21:06.949644    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:21:06.949649    4640 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 16:21:06.972147    4640 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0805 16:21:07.030449    4640 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0805 16:21:07.036220    4640 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0805 16:21:07.036233    4640 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0805 16:21:07.036239    4640 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0805 16:21:07.036249    4640 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 16:21:07.036254    4640 command_runner.go:130] > Access: 2024-08-05 23:20:43.694299549 +0000
	I0805 16:21:07.036259    4640 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0805 16:21:07.036264    4640 command_runner.go:130] > Change: 2024-08-05 23:20:41.058596444 +0000
	I0805 16:21:07.036266    4640 command_runner.go:130] >  Birth: -
	I0805 16:21:07.036368    4640 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0805 16:21:07.036375    4640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0805 16:21:07.050414    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0805 16:21:07.243070    4640 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0805 16:21:07.246445    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0805 16:21:07.250670    4640 command_runner.go:130] > serviceaccount/kindnet created
	I0805 16:21:07.255971    4640 command_runner.go:130] > daemonset.apps/kindnet created
	I0805 16:21:07.257424    4640 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 16:21:07.257500    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-985000 minikube.k8s.io/updated_at=2024_08_05T16_21_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=multinode-985000 minikube.k8s.io/primary=true
	I0805 16:21:07.257502    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.266956    4640 command_runner.go:130] > -16
	I0805 16:21:07.267023    4640 ops.go:34] apiserver oom_adj: -16
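
The oom_adj probe above confirms the apiserver runs with a strongly negative OOM score (-16), telling the kernel OOM killer to spare it under memory pressure. The equivalent of `cat /proc/$(pgrep kube-apiserver)/oom_adj` in Go, with the PID as a parameter:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // oomAdj reads the (legacy) oom_adj value for a process.
    func oomAdj(pid int) (string, error) {
        b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        v, err := oomAdj(1) // pass the apiserver's PID on a real node
        if err != nil {
            panic(err)
        }
        fmt.Println(v)
    }
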
	I0805 16:21:07.390396    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0805 16:21:07.392070    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.400579    4640 command_runner.go:130] > node/multinode-985000 labeled
	I0805 16:21:07.456213    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:07.893323    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.956622    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:08.392391    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:08.450793    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:08.892411    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:08.950456    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:09.393238    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:09.450291    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:09.892156    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:09.951159    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:10.393019    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:10.451734    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:10.893100    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:10.954360    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:11.393009    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:11.452879    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:11.894187    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:11.953480    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:12.392194    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:12.452444    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:12.894265    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:12.955367    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:13.392882    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:13.455680    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:13.892568    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:13.950195    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:14.393254    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:14.452940    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:14.892187    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:14.948447    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:15.392762    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:15.451815    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:15.892531    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:15.952781    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:16.393008    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:16.454659    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:16.892423    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:16.957989    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:17.392489    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:17.452653    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:17.892453    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:17.953809    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:18.392692    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:18.450726    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:18.893940    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:18.957266    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:19.393402    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:19.452345    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:19.892761    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:19.952524    4640 command_runner.go:130] > NAME      SECRETS   AGE
	I0805 16:21:19.952537    4640 command_runner.go:130] > default   0         1s
	I0805 16:21:19.952551    4640 kubeadm.go:1113] duration metric: took 12.695106906s to wait for elevateKubeSystemPrivileges
	I0805 16:21:19.952568    4640 kubeadm.go:394] duration metric: took 22.244643678s to StartCluster
	I0805 16:21:19.952584    4640 settings.go:142] acquiring lock: {Name:mk564a817a54ecf2aef16a4d2309e85208c0231f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:21:19.952678    4640 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:19.953130    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/kubeconfig: {Name:mk2a0d8b4d330b3c26432fc65d015ddf98a9cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:21:19.953387    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 16:21:19.953391    4640 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:21:19.953437    4640 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 16:21:19.953474    4640 addons.go:69] Setting storage-provisioner=true in profile "multinode-985000"
	I0805 16:21:19.953501    4640 addons.go:234] Setting addon storage-provisioner=true in "multinode-985000"
	I0805 16:21:19.953507    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:19.953501    4640 addons.go:69] Setting default-storageclass=true in profile "multinode-985000"
	I0805 16:21:19.953520    4640 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:21:19.953542    4640 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-985000"
	I0805 16:21:19.953772    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.953787    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.953870    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.953897    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.962985    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52500
	I0805 16:21:19.963341    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52502
	I0805 16:21:19.963365    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.963645    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.963722    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.963735    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.963997    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.964004    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.964027    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.964249    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.964372    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.964430    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.964458    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.964465    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.964535    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.966651    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:19.966874    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:19.967275    4640 cert_rotation.go:137] Starting client certificate rotation controller
	I0805 16:21:19.967411    4640 addons.go:234] Setting addon default-storageclass=true in "multinode-985000"
	I0805 16:21:19.967434    4640 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:21:19.967665    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.967688    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.973226    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52504
	I0805 16:21:19.973568    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.973922    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.973942    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.974163    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.974282    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.974363    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.974444    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.975405    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:21:19.975491    4640 out.go:177] * Verifying Kubernetes components...
	I0805 16:21:19.976182    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52506
	I0805 16:21:19.976461    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.976795    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.976812    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.976999    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.977392    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.977409    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.986027    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52508
	I0805 16:21:19.986361    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.986712    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.986741    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.986959    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.987071    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.987149    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.987227    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.988179    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:21:19.988299    4640 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 16:21:19.988307    4640 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 16:21:19.988315    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:21:19.988395    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:21:19.988484    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:21:19.988568    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:21:19.988639    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:21:20.032241    4640 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:21:20.032361    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:20.069496    4640 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:21:20.069510    4640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 16:21:20.069530    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:21:20.069717    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:21:20.069824    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:21:20.069935    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:21:20.070041    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:21:20.084762    4640 command_runner.go:130] > apiVersion: v1
	I0805 16:21:20.084775    4640 command_runner.go:130] > data:
	I0805 16:21:20.084779    4640 command_runner.go:130] >   Corefile: |
	I0805 16:21:20.084782    4640 command_runner.go:130] >     .:53 {
	I0805 16:21:20.084785    4640 command_runner.go:130] >         errors
	I0805 16:21:20.084790    4640 command_runner.go:130] >         health {
	I0805 16:21:20.084794    4640 command_runner.go:130] >            lameduck 5s
	I0805 16:21:20.084796    4640 command_runner.go:130] >         }
	I0805 16:21:20.084812    4640 command_runner.go:130] >         ready
	I0805 16:21:20.084822    4640 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0805 16:21:20.084829    4640 command_runner.go:130] >            pods insecure
	I0805 16:21:20.084833    4640 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0805 16:21:20.084841    4640 command_runner.go:130] >            ttl 30
	I0805 16:21:20.084853    4640 command_runner.go:130] >         }
	I0805 16:21:20.084863    4640 command_runner.go:130] >         prometheus :9153
	I0805 16:21:20.084868    4640 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0805 16:21:20.084880    4640 command_runner.go:130] >            max_concurrent 1000
	I0805 16:21:20.084884    4640 command_runner.go:130] >         }
	I0805 16:21:20.084887    4640 command_runner.go:130] >         cache 30
	I0805 16:21:20.084898    4640 command_runner.go:130] >         loop
	I0805 16:21:20.084902    4640 command_runner.go:130] >         reload
	I0805 16:21:20.084905    4640 command_runner.go:130] >         loadbalance
	I0805 16:21:20.084908    4640 command_runner.go:130] >     }
	I0805 16:21:20.084911    4640 command_runner.go:130] > kind: ConfigMap
	I0805 16:21:20.084914    4640 command_runner.go:130] > metadata:
	I0805 16:21:20.084921    4640 command_runner.go:130] >   creationTimestamp: "2024-08-05T23:21:06Z"
	I0805 16:21:20.084926    4640 command_runner.go:130] >   name: coredns
	I0805 16:21:20.084929    4640 command_runner.go:130] >   namespace: kube-system
	I0805 16:21:20.084933    4640 command_runner.go:130] >   resourceVersion: "266"
	I0805 16:21:20.084937    4640 command_runner.go:130] >   uid: 5057af03-8824-4e67-a4b6-ef90c1ded7ce
	I0805 16:21:20.085056    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0805 16:21:20.184335    4640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:21:20.203408    4640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 16:21:20.278639    4640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:21:20.507141    4640 command_runner.go:130] > configmap/coredns replaced
	I0805 16:21:20.511660    4640 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
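
For context, the `configmap/coredns replaced` result above is the effect of the sed pipeline run at 16:21:20.085056. The rewritten Corefile is not echoed back into this log, but reconstructing it from the sed expression itself, the pipeline adds a `log` directive before `errors` and inserts the following stanza ahead of the `forward . /etc/resolv.conf` block — this is what makes host.minikube.internal resolve to the hyperkit host (192.169.0.1) from inside the guest:

	hosts {
	   192.169.0.1 host.minikube.internal
	   fallthrough
	}
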
	I0805 16:21:20.511929    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:20.511932    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:20.512124    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:20.512125    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:20.512341    4640 node_ready.go:35] waiting up to 6m0s for node "multinode-985000" to be "Ready" ...
	I0805 16:21:20.512409    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:20.512416    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.512423    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.512424    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:20.512428    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.512430    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.512438    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.512446    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.520076    4640 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0805 16:21:20.520087    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.520092    4640 round_trippers.go:580]     Audit-Id: 304f14c4-a466-4fb6-b401-b28f4df4dfa1
	I0805 16:21:20.520095    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.520103    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.520107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.520111    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.520113    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:20.520117    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.521443    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.521456    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.521464    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.521474    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"381","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.521479    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.521487    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.521502    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.521511    4640 round_trippers.go:580]     Audit-Id: bcd9e393-6b08-4ffb-a73b-6e7c430f0212
	I0805 16:21:20.521518    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.521831    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:20.521865    4640 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"381","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.521904    4640 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:20.521914    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.521921    4640 round_trippers.go:473]     Content-Type: application/json
	I0805 16:21:20.521930    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.521935    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.530726    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.530739    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.530744    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.530748    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:20.530751    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.530754    4640 round_trippers.go:580]     Audit-Id: ba15a3b2-b69b-473e-a331-81e01385ad47
	I0805 16:21:20.530756    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.530758    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.530761    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.530773    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"383","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.588534    4640 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0805 16:21:20.588563    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.588570    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.588737    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.588752    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.588765    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.588764    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.588772    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.588919    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.588920    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.588931    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.589012    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I0805 16:21:20.589020    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.589028    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.589034    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.597496    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.597508    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.597513    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.597518    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.597521    4640 round_trippers.go:580]     Content-Length: 1273
	I0805 16:21:20.597523    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.597525    4640 round_trippers.go:580]     Audit-Id: d7394cfc-1eb3-4623-8a7f-a5088a0398c8
	I0805 16:21:20.597527    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.597530    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.597844    4640 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"391"},"items":[{"metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0805 16:21:20.598117    4640 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0805 16:21:20.598145    4640 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0805 16:21:20.598150    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.598157    4640 round_trippers.go:473]     Content-Type: application/json
	I0805 16:21:20.598166    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.598171    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.619819    4640 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0805 16:21:20.619836    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.619842    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.619846    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.619849    4640 round_trippers.go:580]     Content-Length: 1220
	I0805 16:21:20.619852    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.619855    4640 round_trippers.go:580]     Audit-Id: 299d4cc8-0cb5-4dd5-80b3-5d54592ecd90
	I0805 16:21:20.619859    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.619861    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.619898    4640 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0805 16:21:20.619983    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.619992    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.620141    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.620153    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.620166    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.750372    4640 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0805 16:21:20.753871    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0805 16:21:20.759257    4640 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0805 16:21:20.767575    4640 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0805 16:21:20.774745    4640 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0805 16:21:20.786454    4640 command_runner.go:130] > pod/storage-provisioner created
	I0805 16:21:20.787838    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.787851    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.788087    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.788087    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.788098    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.788109    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.788117    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.788261    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.788280    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.788280    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.811467    4640 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0805 16:21:20.871433    4640 addons.go:510] duration metric: took 917.995637ms for enable addons: enabled=[default-storageclass storage-provisioner]
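
The default-storageclass addon finishing above works by listing StorageClasses and re-PUTting the `standard` class so that the `storageclass.kubernetes.io/is-default-class: "true"` annotation is in place (the GET/PUT pair at 16:21:20.589012 and 16:21:20.598145). A minimal client-go sketch of the same inspection follows; it is an illustration, not minikube's actual code, and `kubeconfigPath` is a placeholder rather than a path from this run:

	// Standalone sketch: connect with a kubeconfig and print whichever
	// StorageClass carries the default-class annotation.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfigPath := "/path/to/kubeconfig" // placeholder, not a path from this run
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, sc := range scs.Items {
			// The addon marks exactly one class as the cluster default.
			if sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true" {
				fmt.Printf("default StorageClass: %s\n", sc.Name)
			}
		}
	}
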
	I0805 16:21:21.014507    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:21.014532    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.014545    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.014553    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.014605    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:21.014619    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.014631    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.014638    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.017465    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.017464    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.017480    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.017492    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.017492    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.017496    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:21.017502    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.017504    4640 round_trippers.go:580]     Audit-Id: fb264fed-80ee-469b-a34e-7b1e8460f94b
	I0805 16:21:21.017506    4640 round_trippers.go:580]     Audit-Id: c9362211-8dfc-4385-87db-76c6486df53e
	I0805 16:21:21.017512    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.017513    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.017518    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.017519    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.017522    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.017524    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.017529    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.017545    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.017616    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"395","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:21.017684    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:21.017735    4640 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-985000" context rescaled to 1 replicas
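
The `rescaled to 1 replicas` line is the outcome of the Scale GET/PUT pair at 16:21:20.512409 and 16:21:20.521904 above: for a cluster that currently has a single node, kapi.go reads the coredns deployment's Scale subresource and writes it back with spec.replicas lowered from 2 to 1. A hedged client-go sketch of that exchange (the function name `scaleCoreDNSToOne` is illustrative, not minikube's):

	// scaleCoreDNSToOne mirrors the logged GET/PUT against the Scale
	// subresource: read the current scale of kube-system/coredns, set
	// spec.replicas to 1, and send it back.
	// Assumed imports: context, metav1 "k8s.io/apimachinery/pkg/apis/meta/v1",
	// "k8s.io/client-go/kubernetes".
	func scaleCoreDNSToOne(ctx context.Context, cs kubernetes.Interface) error {
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		scale.Spec.Replicas = 1
		_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}
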
	I0805 16:21:21.514170    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:21.514200    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.514219    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.514226    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.516804    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.516819    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.516826    4640 round_trippers.go:580]     Audit-Id: 9396255c-231d-48cb-a53f-22663307b969
	I0805 16:21:21.516830    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.516834    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.516839    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.516849    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.516854    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.516951    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.013275    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:22.013299    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:22.013311    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:22.013319    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:22.016138    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:22.016155    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:22.016163    4640 round_trippers.go:580]     Audit-Id: cc869aef-9ab4-4a7f-8835-cce2afa76dd9
	I0805 16:21:22.016168    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:22.016175    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:22.016182    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:22.016187    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:22.016193    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:22 GMT
	I0805 16:21:22.016497    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.512546    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:22.512561    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:22.512567    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:22.512572    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:22.515381    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:22.515393    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:22.515401    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:22.515407    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:22.515412    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:22.515416    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:22 GMT
	I0805 16:21:22.515420    4640 round_trippers.go:580]     Audit-Id: e7d470a0-7df5-4d85-9bb5-cbf15cfa989f
	I0805 16:21:22.515423    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:22.515634    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.515838    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
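
From here the log settles into the node_ready.go wait loop: the same GET against /api/v1/nodes/multinode-985000 roughly every 500ms, reporting `"Ready":"False"` until the kubelet posts a Ready condition or the 6m0s budget from start.go runs out. A sketch of an equivalent loop with client-go (`waitNodeReady` is an illustrative name, and `wait.PollImmediate` stands in for minikube's own retry logic):

	// waitNodeReady polls the named node until its NodeReady condition
	// reports True, failing once the timeout elapses.
	// Assumed imports: context, time, corev1 "k8s.io/api/core/v1",
	// metav1 "k8s.io/apimachinery/pkg/apis/meta/v1",
	// "k8s.io/apimachinery/pkg/util/wait", "k8s.io/client-go/kubernetes".
	func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}
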
	I0805 16:21:23.012594    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:23.012606    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:23.012612    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:23.012616    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:23.014085    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:23.014095    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:23.014101    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:23.014104    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:23.014107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:23.014109    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:23.014113    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:23 GMT
	I0805 16:21:23.014116    4640 round_trippers.go:580]     Audit-Id: e12d5034-3bd9-498b-844e-12133805ded9
	I0805 16:21:23.014306    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:23.513150    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:23.513163    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:23.513168    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:23.513172    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:23.514595    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:23.514604    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:23.514610    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:23.514614    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:23.514617    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:23.514619    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:23.514622    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:23 GMT
	I0805 16:21:23.514635    4640 round_trippers.go:580]     Audit-Id: 2bc52e3b-1575-453f-87fa-51f4301a9426
	I0805 16:21:23.514871    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:24.012814    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:24.012826    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:24.012832    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:24.012835    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:24.014366    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:24.014379    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:24.014384    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:24.014388    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:24.014406    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:24.014411    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:24.014414    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:24 GMT
	I0805 16:21:24.014417    4640 round_trippers.go:580]     Audit-Id: f14d8611-e5e1-45fe-92f3-95559148c71b
	I0805 16:21:24.014572    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:24.513607    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:24.513620    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:24.513626    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:24.513629    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:24.515210    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:24.515220    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:24.515242    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:24.515253    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:24.515260    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:24.515264    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:24.515268    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:24 GMT
	I0805 16:21:24.515271    4640 round_trippers.go:580]     Audit-Id: 0a897d84-d437-4212-b36d-e414fedf55d4
	I0805 16:21:24.515427    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:25.013253    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:25.013272    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:25.013283    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:25.013321    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:25.015275    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:25.015308    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:25.015317    4640 round_trippers.go:580]     Audit-Id: ced7b45c-a072-4322-89ab-d0cc21ddfb1d
	I0805 16:21:25.015322    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:25.015325    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:25.015328    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:25.015332    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:25.015336    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:25 GMT
	I0805 16:21:25.015627    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:25.015849    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:25.512881    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:25.512902    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:25.512914    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:25.512920    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:25.515502    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:25.515517    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:25.515524    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:25.515529    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:25.515534    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:25.515538    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:25.515542    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:25 GMT
	I0805 16:21:25.515545    4640 round_trippers.go:580]     Audit-Id: dd6b59c1-dde3-4d67-b446-8823ad717d4f
	I0805 16:21:25.515665    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:26.013787    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:26.013811    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:26.013824    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:26.013830    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:26.016420    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:26.016440    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:26.016463    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:26 GMT
	I0805 16:21:26.016470    4640 round_trippers.go:580]     Audit-Id: 19939705-2879-44e6-830c-0c86394087ed
	I0805 16:21:26.016473    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:26.016485    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:26.016490    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:26.016494    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:26.016965    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:26.512523    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:26.512536    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:26.512541    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:26.512544    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:26.514158    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:26.514167    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:26.514172    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:26.514176    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:26.514179    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:26.514182    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:26 GMT
	I0805 16:21:26.514184    4640 round_trippers.go:580]     Audit-Id: f2346665-2701-41e1-94b0-41a70aa2f170
	I0805 16:21:26.514187    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:26.514489    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:27.013107    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:27.013136    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:27.013148    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:27.013155    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:27.015615    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:27.015632    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:27.015639    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:27 GMT
	I0805 16:21:27.015655    4640 round_trippers.go:580]     Audit-Id: 6abee22d-c1db-48e9-99db-e07791ed571f
	I0805 16:21:27.015661    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:27.015664    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:27.015667    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:27.015672    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:27.015747    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:27.015996    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:27.513549    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:27.513570    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:27.513582    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:27.513589    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:27.516173    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:27.516189    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:27.516197    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:27.516200    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:27.516204    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:27.516209    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:27 GMT
	I0805 16:21:27.516212    4640 round_trippers.go:580]     Audit-Id: a227585b-ae23-4bd1-b1dc-643eadd970cc
	I0805 16:21:27.516215    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:27.516416    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:28.014104    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:28.014132    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:28.014143    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:28.014159    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:28.016690    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:28.016705    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:28.016713    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:28.016717    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:28.016721    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:28.016725    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:28.016728    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:28 GMT
	I0805 16:21:28.016731    4640 round_trippers.go:580]     Audit-Id: 0d14831c-cc1f-41a9-a252-85e191b9594d
	I0805 16:21:28.016834    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:28.512703    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:28.512726    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:28.512739    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:28.512747    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:28.515176    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:28.515190    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:28.515197    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:28 GMT
	I0805 16:21:28.515201    4640 round_trippers.go:580]     Audit-Id: 6af459f8-bb08-43bf-ac7f-51ccacd5d664
	I0805 16:21:28.515206    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:28.515211    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:28.515215    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:28.515219    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:28.515378    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.013324    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:29.013354    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:29.013360    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:29.013364    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:29.014793    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:29.014804    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:29.014809    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:29.014813    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:29 GMT
	I0805 16:21:29.014817    4640 round_trippers.go:580]     Audit-Id: 2e50ff34-0c55-4136-b537-eee73f73706d
	I0805 16:21:29.014819    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:29.014822    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:29.014826    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:29.015098    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.513802    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:29.513832    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:29.513844    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:29.513852    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:29.516479    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:29.516496    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:29.516504    4640 round_trippers.go:580]     Audit-Id: bcbc3920-26b4-45f4-b91a-ce0e3dc11770
	I0805 16:21:29.516529    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:29.516538    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:29.516544    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:29.516549    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:29.516554    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:29 GMT
	I0805 16:21:29.516682    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.516938    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:30.013325    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:30.013349    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:30.013436    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:30.013448    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:30.016209    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:30.016222    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:30.016228    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:30.016233    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:30 GMT
	I0805 16:21:30.016238    4640 round_trippers.go:580]     Audit-Id: fb0bd3e0-89c3-4c77-a27d-be315cab22b7
	I0805 16:21:30.016242    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:30.016277    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:30.016283    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:30.016477    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:30.514344    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:30.514386    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:30.514482    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:30.514494    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:30.518828    4640 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 16:21:30.518860    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:30.518870    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:30.518876    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:30.518882    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:30 GMT
	I0805 16:21:30.518888    4640 round_trippers.go:580]     Audit-Id: c1b08932-ee78-4dcb-a190-3a8b24421284
	I0805 16:21:30.518894    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:30.518899    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:30.519002    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:31.012673    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:31.012701    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:31.012712    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:31.012718    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:31.015543    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:31.015560    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:31.015568    4640 round_trippers.go:580]     Audit-Id: b6586a64-ec07-44ee-8a00-1f3b8a00e0bd
	I0805 16:21:31.015572    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:31.015576    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:31.015580    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:31.015583    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:31.015589    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:31 GMT
	I0805 16:21:31.015682    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:31.512531    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:31.512543    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:31.512550    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:31.512554    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:31.514066    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:31.514076    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:31.514081    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:31 GMT
	I0805 16:21:31.514085    4640 round_trippers.go:580]     Audit-Id: 7d410de7-b0d5-4d4e-8455-d31b0df7d302
	I0805 16:21:31.514089    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:31.514093    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:31.514096    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:31.514107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:31.514758    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:32.014110    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:32.014136    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:32.014147    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:32.014157    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:32.016553    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:32.016570    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:32.016580    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:32.016586    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:32.016592    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:32.016598    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:32.016602    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:32 GMT
	I0805 16:21:32.016605    4640 round_trippers.go:580]     Audit-Id: 67fdb64b-273a-46c2-aac5-c3b115422aa4
	I0805 16:21:32.016861    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:32.017132    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:32.513171    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:32.513188    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:32.513195    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:32.513198    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:32.514908    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:32.514920    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:32.514925    4640 round_trippers.go:580]     Audit-Id: 0f5a2e98-6be6-4963-8897-91c70642048c
	I0805 16:21:32.514928    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:32.514931    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:32.514933    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:32.514936    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:32.514939    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:32 GMT
	I0805 16:21:32.515082    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:33.013769    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:33.013803    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:33.013814    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:33.013822    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:33.016491    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:33.016509    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:33.016519    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:33 GMT
	I0805 16:21:33.016526    4640 round_trippers.go:580]     Audit-Id: 96b5f269-7be9-42a9-9687-cba57d05f76e
	I0805 16:21:33.016532    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:33.016538    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:33.016543    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:33.016548    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:33.016715    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:33.512751    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:33.512772    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:33.512783    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:33.512789    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:33.515431    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:33.515480    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:33.515498    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:33.515506    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:33.515510    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:33 GMT
	I0805 16:21:33.515513    4640 round_trippers.go:580]     Audit-Id: 6cd252a3-d07d-441e-bcf4-bc3bd00c2488
	I0805 16:21:33.515517    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:33.515520    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:33.515747    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.013003    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:34.013032    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:34.013043    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:34.013052    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:34.015447    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:34.015465    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:34.015472    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:34.015476    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:34.015479    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:34.015484    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:34.015487    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:34 GMT
	I0805 16:21:34.015492    4640 round_trippers.go:580]     Audit-Id: efcfb0d1-8345-4db5-bce9-e31085842da3
	I0805 16:21:34.015599    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.513298    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:34.513317    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:34.513376    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:34.513383    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:34.515051    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:34.515065    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:34.515072    4640 round_trippers.go:580]     Audit-Id: 2a42cb6a-0051-47bd-85f4-9f8ca80afa70
	I0805 16:21:34.515078    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:34.515081    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:34.515087    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:34.515099    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:34.515103    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:34 GMT
	I0805 16:21:34.515359    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.515540    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:35.013932    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:35.013957    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:35.013968    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:35.013976    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:35.016505    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:35.016524    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:35.016530    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:35.016537    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:35.016541    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:35.016544    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:35.016555    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:35 GMT
	I0805 16:21:35.016559    4640 round_trippers.go:580]     Audit-Id: 09fa0e04-c026-439e-9cd7-392fd82b16fe
	I0805 16:21:35.016913    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:35.513491    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:35.513514    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:35.513526    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:35.513532    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:35.515995    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:35.516012    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:35.516020    4640 round_trippers.go:580]     Audit-Id: a2b05a8a-9a91-4d20-93d0-b8701ac59b95
	I0805 16:21:35.516024    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:35.516036    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:35.516041    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:35.516055    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:35.516060    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:35 GMT
	I0805 16:21:35.516151    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:36.013521    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.013549    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.013561    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.013566    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.016095    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:36.016112    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.016119    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.016131    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.016136    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.016140    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.016144    4640 round_trippers.go:580]     Audit-Id: 77e04f39-a037-4ea2-9716-ad04139089d1
	I0805 16:21:36.016147    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.016230    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:36.016465    4640 node_ready.go:49] node "multinode-985000" has status "Ready":"True"
	I0805 16:21:36.016481    4640 node_ready.go:38] duration metric: took 15.504115701s for node "multinode-985000" to be "Ready" ...
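The fifteen seconds of polling above is minikube's node readiness wait: node_ready.go re-fetches /api/v1/nodes/multinode-985000 roughly every 500ms (note the ~.013/.513 request timestamps) until the node's Ready condition reports "True", which the duration metric puts at 15.504s. A minimal client-go sketch of such a loop, with assumed names (waitForNodeReady) and an assumed default kubeconfig location, not minikube's actual implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the named node until its Ready condition is True,
// mirroring the GET loop in the log (one request roughly every 500ms).
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForNodeReady(ctx, cs, "multinode-985000"); err != nil {
		panic(err)
	}
	fmt.Println("node multinode-985000 is Ready")
}

The equivalent one-off check from a shell: kubectl get node multinode-985000 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'.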
	I0805 16:21:36.016489    4640 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:21:36.016543    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:36.016551    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.016559    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.016563    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.019046    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:36.019057    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.019065    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.019069    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.019078    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.019081    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.019084    4640 round_trippers.go:580]     Audit-Id: 96048303-6e62-4ba8-a291-bc1ad976756e
	I0805 16:21:36.019091    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.019721    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
	I0805 16:21:36.021921    4640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:36.021960    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:36.021964    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.021970    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.021974    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.023179    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.023187    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.023192    4640 round_trippers.go:580]     Audit-Id: ba42f387-f106-4773-86de-3a22085fd86a
	I0805 16:21:36.023195    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.023198    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.023200    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.023204    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.023208    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.023410    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:36.023652    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.023659    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.023665    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.023671    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.024732    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.024744    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.024752    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.024758    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.024765    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.024768    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.024771    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.024775    4640 round_trippers.go:580]     Audit-Id: 2008721c-b230-4e73-b037-d3a843d7c7c8
	I0805 16:21:36.024909    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:36.523495    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:36.523508    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.523514    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.523519    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.525003    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.525014    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.525020    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.525042    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.525049    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.525053    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.525060    4640 round_trippers.go:580]     Audit-Id: 1ad5a8dd-64b3-4881-9a8e-e5eaab368c53
	I0805 16:21:36.525066    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.525202    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:36.525483    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.525490    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.525498    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.525502    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.526801    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.526810    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.526814    4640 round_trippers.go:580]     Audit-Id: 71c4017f-a267-489e-86ed-59098eae3b88
	I0805 16:21:36.526817    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.526834    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.526840    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.526846    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.526850    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.527025    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:37.022759    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:37.022781    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.022791    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.022799    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.025487    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.025503    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.025510    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.025515    4640 round_trippers.go:580]     Audit-Id: 7446d9fd-22ed-4d20-b0f2-e8c4a88b04f4
	I0805 16:21:37.025536    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.025543    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.025547    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.025556    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.025649    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:37.026010    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.026020    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.026028    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.026033    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.027337    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.027346    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.027354    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.027359    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.027363    4640 round_trippers.go:580]     Audit-Id: a309eed4-f088-47f7-8b84-4761b59dbb8c
	I0805 16:21:37.027366    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.027368    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.027371    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.027425    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.522283    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:37.522304    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.522315    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.522322    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.524762    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.524776    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.524782    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.524788    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.524792    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.524795    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.524799    4640 round_trippers.go:580]     Audit-Id: eaef42a8-7b43-4091-9b70-8d31adc979e5
	I0805 16:21:37.524803    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.525073    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0805 16:21:37.525438    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.525480    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.525488    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.525492    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.526890    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.526903    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.526912    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.526918    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.526927    4640 round_trippers.go:580]     Audit-Id: a3a0e71a-c982-4504-9fae-e76101688c05
	I0805 16:21:37.526931    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.526935    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.526937    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.527034    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.527211    4640 pod_ready.go:92] pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.527220    4640 pod_ready.go:81] duration metric: took 1.505289062s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
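
The GETs above repeat on a ~500 ms cadence: fetch the pod, look for its Ready condition, retry until it reports True or the 6m0s budget runs out. A minimal client-go sketch of the same loop — assuming a standard ~/.kube/config and reusing the pod name from this log — might look like:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the default kubeconfig points at this cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms for up to 6 minutes, matching the interval and
	// budget visible in the log above.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-fqtll", metav1.GetOptions{})
			if err != nil {
				return false, err // a failed GET aborts the wait
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // Ready condition not posted yet; keep polling
		})
	fmt.Println("pod ready:", err == nil)
}
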
	I0805 16:21:37.527230    4640 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.527259    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-985000
	I0805 16:21:37.527264    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.527269    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.527277    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.528379    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.528389    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.528394    4640 round_trippers.go:580]     Audit-Id: 3cf4f372-47fb-4b72-9b30-185d93d01537
	I0805 16:21:37.528401    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.528405    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.528408    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.528411    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.528414    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.528618    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-985000","namespace":"kube-system","uid":"8d7ca2d9-8c7b-41b9-a199-de6449107471","resourceVersion":"379","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.mirror":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030299Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0805 16:21:37.528833    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.528840    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.528845    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.528850    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.529802    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.529808    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.529813    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.529816    4640 round_trippers.go:580]     Audit-Id: 314df0bd-894e-4607-bad0-3348c18fe807
	I0805 16:21:37.529820    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.529823    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.529826    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.529833    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.530046    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.530203    4640 pod_ready.go:92] pod "etcd-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.530210    4640 pod_ready.go:81] duration metric: took 2.974841ms for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.530218    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.530249    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-985000
	I0805 16:21:37.530253    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.530259    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.530262    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.531449    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.531456    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.531461    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.531463    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.531467    4640 round_trippers.go:580]     Audit-Id: 1801a8f0-22d5-44e8-942c-ea521b1ffa66
	I0805 16:21:37.531469    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.531475    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.531477    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.531592    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-985000","namespace":"kube-system","uid":"9be3378a-5fab-4907-baad-507918e714e4","resourceVersion":"369","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.mirror":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0805 16:21:37.531810    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.531820    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.531825    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.531830    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.532663    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.532668    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.532672    4640 round_trippers.go:580]     Audit-Id: 6d0fc4ed-c609-4ee7-a57f-b61eed1bc442
	I0805 16:21:37.532675    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.532679    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.532682    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.532684    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.532688    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.532807    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.532958    4640 pod_ready.go:92] pod "kube-apiserver-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.532967    4640 pod_ready.go:81] duration metric: took 2.743443ms for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.532973    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.533000    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-985000
	I0805 16:21:37.533004    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.533009    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.533012    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.533987    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.533995    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.534000    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.534004    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.534020    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.534027    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.534031    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.534034    4640 round_trippers.go:580]     Audit-Id: 97e4dc5c-f4bf-419e-8b15-be800418054c
	I0805 16:21:37.534147    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-985000","namespace":"kube-system","uid":"4ad64361-65de-4b0b-b2a3-07df18c2e603","resourceVersion":"342","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.mirror":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.seen":"2024-08-05T23:21:06.366027130Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0805 16:21:37.534370    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.534377    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.534383    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.534386    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.535293    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.535301    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.535305    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.535308    4640 round_trippers.go:580]     Audit-Id: a4c04a0a-9401-41d1-a0fc-f2a2187abde4
	I0805 16:21:37.535310    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.535313    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.535320    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.535323    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.535432    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.535591    4640 pod_ready.go:92] pod "kube-controller-manager-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.535599    4640 pod_ready.go:81] duration metric: took 2.621545ms for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.535606    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.535629    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwgw7
	I0805 16:21:37.535634    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.535639    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.535643    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.536550    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.536557    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.536565    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.536570    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.536575    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.536578    4640 round_trippers.go:580]     Audit-Id: 5a688e80-7db3-4070-a1a8-c3419ddb4d44
	I0805 16:21:37.536580    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.536582    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.536704    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fwgw7","generateName":"kube-proxy-","namespace":"kube-system","uid":"3fb72e39-699d-4123-ae5e-e314a191d904","resourceVersion":"409","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b6258e6-7b31-4600-b32b-4a269867c123","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b6258e6-7b31-4600-b32b-4a269867c123\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0805 16:21:37.614745    4640 request.go:629] Waited for 77.807971ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.614815    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.614822    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.614839    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.614845    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.616956    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.616984    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.616989    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.616993    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.616996    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.616999    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.617002    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.617005    4640 round_trippers.go:580]     Audit-Id: e297627c-4c52-417b-935c-d406bf086c16
	I0805 16:21:37.617232    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.617428    4640 pod_ready.go:92] pod "kube-proxy-fwgw7" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.617437    4640 pod_ready.go:81] duration metric: took 81.82693ms for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
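
The "Waited for 77.807971ms due to client-side throttling, not priority and fairness" lines that start appearing here come from client-go's own token-bucket rate limiter, not from the API server's priority-and-fairness machinery: a rest.Config left at its zero values gets QPS=5 and Burst=10, and the burst of back-to-back readiness GETs exhausts that budget. The limiter is configurable per client; a sketch (the raised values are illustrative, not what minikube uses):

package clusterclient

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// newFastClient builds a clientset with a larger client-side rate budget.
// Hypothetical helper for illustration only.
func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// With QPS/Burst left at zero, client-go applies defaults of 5 and 10;
	// raising them shortens or removes the throttling waits seen above.
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}
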
	I0805 16:21:37.617444    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.815296    4640 request.go:629] Waited for 197.761592ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:21:37.815347    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:21:37.815355    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.815366    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.815376    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.817961    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.817976    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.818001    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.818008    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:37.818049    4640 round_trippers.go:580]     Audit-Id: cc44c4e8-8012-4718-aa24-c05fec399a2e
	I0805 16:21:37.818064    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.818078    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.818082    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.818186    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-985000","namespace":"kube-system","uid":"5e23b1b7-e45d-4b43-831c-aa835c5e536d","resourceVersion":"396","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.mirror":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.seen":"2024-08-05T23:21:06.366029633Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0805 16:21:38.014472    4640 request.go:629] Waited for 195.947535ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:38.014569    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:38.014578    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.014589    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.014597    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.017395    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.017406    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.017413    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.017418    4640 round_trippers.go:580]     Audit-Id: 925efcbc-f43b-4431-905e-26927bb76a48
	I0805 16:21:38.017422    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.017428    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.017434    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.017441    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.017905    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:38.018153    4640 pod_ready.go:92] pod "kube-scheduler-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:38.018164    4640 pod_ready.go:81] duration metric: took 400.713995ms for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:38.018173    4640 pod_ready.go:38] duration metric: took 2.001673669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:21:38.018198    4640 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:21:38.018268    4640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:21:38.030133    4640 command_runner.go:130] > 1977
	I0805 16:21:38.030360    4640 api_server.go:72] duration metric: took 18.07694495s to wait for apiserver process to appear ...
	I0805 16:21:38.030369    4640 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:21:38.030384    4640 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:21:38.034009    4640 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
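
/healthz is an unversioned endpoint that returns the literal body "ok" when the apiserver is serving, which is exactly what the two lines above show. With an authenticated clientset already in hand, the same probe can go through the discovery REST client instead of raw HTTPS; a sketch, assuming a clientset built as in the earlier example:

package clusterclient

import (
	"context"

	"k8s.io/client-go/kubernetes"
)

// apiserverHealthy issues GET /healthz and reports whether the body is "ok".
func apiserverHealthy(ctx context.Context, client *kubernetes.Clientset) (bool, error) {
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return false, err
	}
	return string(body) == "ok", nil
}
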
	I0805 16:21:38.034048    4640 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0805 16:21:38.034052    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.034058    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.034063    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.034646    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:38.034653    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.034658    4640 round_trippers.go:580]     Audit-Id: 9f5c9766-330c-4bb5-a5de-4c3a0fdbe474
	I0805 16:21:38.034662    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.034665    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.034668    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.034670    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.034673    4640 round_trippers.go:580]     Content-Length: 263
	I0805 16:21:38.034676    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.034687    4640 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0805 16:21:38.034733    4640 api_server.go:141] control plane version: v1.30.3
	I0805 16:21:38.034742    4640 api_server.go:131] duration metric: took 4.369143ms to wait for apiserver health ...
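
The /version payload above is the standard discovery document; its gitVersion field is the "control plane version: v1.30.3" reported just after it. client-go exposes the same data directly (sketch, same clientset assumption as above):

package clusterclient

import (
	"k8s.io/client-go/kubernetes"
)

// controlPlaneVersion returns the apiserver's semantic version, e.g.
// "v1.30.3" for the cluster in this log.
func controlPlaneVersion(client *kubernetes.Clientset) (string, error) {
	info, err := client.Discovery().ServerVersion()
	if err != nil {
		return "", err
	}
	return info.GitVersion, nil
}
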
	I0805 16:21:38.034747    4640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 16:21:38.213812    4640 request.go:629] Waited for 178.999213ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.213950    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.213960    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.213970    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.213980    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.217309    4640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:21:38.217324    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.217331    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.217336    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.217363    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.217372    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.217377    4640 round_trippers.go:580]     Audit-Id: 0f21513f-44e7-4d2f-bacd-2a12fceef757
	I0805 16:21:38.217381    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.217979    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0805 16:21:38.219249    4640 system_pods.go:59] 8 kube-system pods found
	I0805 16:21:38.219261    4640 system_pods.go:61] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:21:38.219265    4640 system_pods.go:61] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:21:38.219268    4640 system_pods.go:61] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:21:38.219271    4640 system_pods.go:61] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:21:38.219276    4640 system_pods.go:61] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:21:38.219278    4640 system_pods.go:61] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:21:38.219280    4640 system_pods.go:61] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:21:38.219283    4640 system_pods.go:61] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:21:38.219286    4640 system_pods.go:74] duration metric: took 184.535842ms to wait for pod list to return data ...
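
The "8 kube-system pods found" tally and the per-pod Running lines are derived from the single PodList GET above. A sketch of the same scan (same clientset assumption):

package clusterclient

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printSystemPods lists kube-system pods and their phases, mirroring the
// system_pods.go summary lines above.
func printSystemPods(ctx context.Context, client *kubernetes.Clientset) error {
	pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
	return nil
}
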
	I0805 16:21:38.219291    4640 default_sa.go:34] waiting for default service account to be created ...
	I0805 16:21:38.413643    4640 request.go:629] Waited for 194.308242ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:21:38.413680    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:21:38.413687    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.413695    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.413699    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.415522    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:38.415531    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.415536    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.415539    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.415543    4640 round_trippers.go:580]     Content-Length: 261
	I0805 16:21:38.415546    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.415548    4640 round_trippers.go:580]     Audit-Id: efc85c0c-9bbc-4cb7-8c14-19ba2f873800
	I0805 16:21:38.415551    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.415553    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.415563    4640 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b0626468-f73b-4e9b-8270-658495d43f4a","resourceVersion":"337","creationTimestamp":"2024-08-05T23:21:19Z"}}]}
	I0805 16:21:38.415681    4640 default_sa.go:45] found service account: "default"
	I0805 16:21:38.415690    4640 default_sa.go:55] duration metric: took 196.394719ms for default service account to be created ...
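
The default service account check is the same list-and-scan pattern: the ServiceAccountList above carries a single item named "default", whose appearance is what the wait is gated on. Sketch (same clientset assumption):

package clusterclient

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasDefaultServiceAccount reports whether the "default" namespace contains
// a service account named "default" yet.
func hasDefaultServiceAccount(ctx context.Context, client *kubernetes.Clientset) (bool, error) {
	sas, err := client.CoreV1().ServiceAccounts("default").List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, sa := range sas.Items {
		if sa.Name == "default" {
			return true, nil
		}
	}
	return false, nil
}
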
	I0805 16:21:38.415697    4640 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 16:21:38.613742    4640 request.go:629] Waited for 198.012461ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.613858    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.613864    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.613870    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.613874    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.616077    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.616090    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.616099    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.616106    4640 round_trippers.go:580]     Audit-Id: 3f8a6f23-788b-41c4-8dee-6ff59c02c21d
	I0805 16:21:38.616112    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.616116    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.616126    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.616143    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.616489    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0805 16:21:38.617747    4640 system_pods.go:86] 8 kube-system pods found
	I0805 16:21:38.617761    4640 system_pods.go:89] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:21:38.617766    4640 system_pods.go:89] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:21:38.617770    4640 system_pods.go:89] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:21:38.617773    4640 system_pods.go:89] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:21:38.617776    4640 system_pods.go:89] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:21:38.617780    4640 system_pods.go:89] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:21:38.617784    4640 system_pods.go:89] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:21:38.617787    4640 system_pods.go:89] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:21:38.617792    4640 system_pods.go:126] duration metric: took 202.090644ms to wait for k8s-apps to be running ...
	I0805 16:21:38.617801    4640 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 16:21:38.617848    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:21:38.629448    4640 system_svc.go:56] duration metric: took 11.643357ms WaitForService to wait for kubelet
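
The kubelet check above shells out over SSH and relies on systemctl's exit status: "is-active --quiet" prints nothing and exits 0 only when the unit is active, so no output parsing is needed. A Go sketch of the same exit-code test, run directly on the node rather than through minikube's ssh_runner (hypothetical standalone program):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// systemctl is-active --quiet <unit> signals the unit's state purely
	// through its exit status.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil) // nil error == exit status 0
}
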
	I0805 16:21:38.629463    4640 kubeadm.go:582] duration metric: took 18.676048708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:21:38.629475    4640 node_conditions.go:102] verifying NodePressure condition ...
	I0805 16:21:38.814057    4640 request.go:629] Waited for 184.539621ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0805 16:21:38.814182    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0805 16:21:38.814193    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.814205    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.814213    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.817076    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.817092    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.817099    4640 round_trippers.go:580]     Audit-Id: 83bb2c88-8ae3-45b7-a0f6-9d3f9fead5f2
	I0805 16:21:38.817103    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.817112    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.817116    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.817123    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.817128    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:39 GMT
	I0805 16:21:38.817200    4640 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I0805 16:21:38.817474    4640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:21:38.817490    4640 node_conditions.go:123] node cpu capacity is 2
	I0805 16:21:38.817502    4640 node_conditions.go:105] duration metric: took 188.023135ms to run NodePressure ...
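
The NodePressure verification above reads each node's capacity and conditions from the NodeList response. A hedged sketch of the same check (assumes client-go; the condition set is the standard kubelet one):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // The capacity values the log prints: ephemeral storage and CPU count.
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
            for _, c := range n.Status.Conditions {
                // Any pressure condition reporting True marks the node unhealthy.
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    if c.Status != corev1.ConditionFalse {
                        fmt.Printf("  %s is %s\n", c.Type, c.Status)
                    }
                }
            }
        }
    }
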
	I0805 16:21:38.817512    4640 start.go:241] waiting for startup goroutines ...
	I0805 16:21:38.817520    4640 start.go:246] waiting for cluster config update ...
	I0805 16:21:38.817530    4640 start.go:255] writing updated cluster config ...
	I0805 16:21:38.838343    4640 out.go:177] 
	I0805 16:21:38.859405    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:38.859465    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:38.881260    4640 out.go:177] * Starting "multinode-985000-m02" worker node in "multinode-985000" cluster
	I0805 16:21:38.923226    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:21:38.923254    4640 cache.go:56] Caching tarball of preloaded images
	I0805 16:21:38.923425    4640 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:21:38.923439    4640 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:21:38.923503    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:38.924257    4640 start.go:360] acquireMachinesLock for multinode-985000-m02: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:21:38.924355    4640 start.go:364] duration metric: took 78.775µs to acquireMachinesLock for "multinode-985000-m02"
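
acquireMachinesLock serializes machine creation across concurrent operations; the Delay:500ms and Timeout:13m0s fields in the lock spec above suggest a named mutex retried every half second. A simplified stand-in using an O_EXCL lock file (not minikube's actual implementation; the path is a placeholder):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquireLock is a hypothetical helper: it retries creating a lock file
    // with O_EXCL (create only if absent) until it succeeds or times out.
    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
            }
            time.Sleep(delay) // mirrors the 500ms Delay in the log
        }
    }

    func main() {
        release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock held; safe to create the machine")
    }
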
	I0805 16:21:38.924379    4640 start.go:93] Provisioning new machine with config: &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
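
The provisioning config dumped above is what gets persisted to the profile's config.json. A trimmed sketch of its shape (a small, illustrative field subset only; the real schema has many more knobs):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Node and ClusterConfig mirror a few of the fields in the dump above;
    // they are illustrative, not minikube's full schema.
    type Node struct {
        Name              string
        IP                string
        Port              int
        KubernetesVersion string
        ControlPlane      bool
        Worker            bool
    }

    type ClusterConfig struct {
        Name     string
        Memory   int
        CPUs     int
        DiskSize int
        Driver   string
        Nodes    []Node
    }

    func main() {
        cfg := ClusterConfig{
            Name: "multinode-985000", Memory: 2200, CPUs: 2, DiskSize: 20000, Driver: "hyperkit",
            Nodes: []Node{
                {IP: "192.169.0.13", Port: 8443, KubernetesVersion: "v1.30.3", ControlPlane: true, Worker: true},
                {Name: "m02", Port: 8443, KubernetesVersion: "v1.30.3", Worker: true}, // IP assigned later by DHCP
            },
        }
        out, _ := json.MarshalIndent(cfg, "", "  ")
        fmt.Println(string(out))
    }
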
	I0805 16:21:38.924443    4640 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0805 16:21:38.946258    4640 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:21:38.946431    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:38.946482    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:38.956315    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52515
	I0805 16:21:38.956651    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:38.957008    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:38.957028    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:38.957245    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:38.957408    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:38.957527    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:38.957642    4640 start.go:159] libmachine.API.Create for "multinode-985000" (driver="hyperkit")
	I0805 16:21:38.957663    4640 client.go:168] LocalClient.Create starting
	I0805 16:21:38.957697    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:21:38.957735    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:21:38.957747    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:21:38.957790    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:21:38.957819    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:21:38.957833    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:21:38.957849    4640 main.go:141] libmachine: Running pre-create checks...
	I0805 16:21:38.957855    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .PreCreateCheck
	I0805 16:21:38.957933    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:38.957959    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:38.967700    4640 main.go:141] libmachine: Creating machine...
	I0805 16:21:38.967725    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .Create
	I0805 16:21:38.967957    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:38.968233    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:38.967940    4677 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:21:38.968338    4640 main.go:141] libmachine: (multinode-985000-m02) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:21:39.171726    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.171650    4677 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa...
	I0805 16:21:39.251408    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.251327    4677 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk...
	I0805 16:21:39.251421    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Writing magic tar header
	I0805 16:21:39.251439    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Writing SSH key tar header
	I0805 16:21:39.252021    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.251983    4677 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02 ...
	I0805 16:21:39.622286    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:39.622309    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid
	I0805 16:21:39.622382    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Using UUID ab5b9c9f-9e28-4bc2-8fcd-b98fce011173
	I0805 16:21:39.647304    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Generated MAC a6:1c:88:9c:44:3
	I0805 16:21:39.647324    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:21:39.647363    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:21:39.647396    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:21:39.647440    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:21:39.647475    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ab5b9c9f-9e28-4bc2-8fcd-b98fce011173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
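
The DEBUG lines above show the full hyperkit invocation: PCI slots for the host bridge, LPC, a virtio-net NIC, the raw disk, the boot ISO and a virtio RNG, plus a kexec boot of the cached kernel/initrd. A sketch assembling the same command in Go (the state directory is a placeholder; the command is printed rather than run, since hyperkit needs root and hypervisor entitlements):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        state := "/path/to/machines/multinode-985000-m02" // placeholder state dir
        args := []string{
            "-A", "-u",
            "-F", state + "/hyperkit.pid", // where hyperkit writes its pid
            "-c", "2", "-m", "2200M", // CPUs and memory from the machine config
            "-s", "0:0,hostbridge", "-s", "31,lpc",
            "-s", "1:0,virtio-net", // VMNet NIC; the generated MAC is tied to the UUID
            "-U", "ab5b9c9f-9e28-4bc2-8fcd-b98fce011173",
            "-s", "2:0,virtio-blk," + state + "/multinode-985000-m02.rawdisk",
            "-s", "3,ahci-cd," + state + "/boot2docker.iso",
            "-s", "4,virtio-rnd",
            "-l", "com1,autopty=" + state + "/tty,log=" + state + "/console-ring",
            "-f", "kexec," + state + "/bzimage," + state + "/initrd," +
                "earlyprintk=serial loglevel=3 console=ttyS0",
        }
        cmd := exec.Command("/usr/local/bin/hyperkit", args...)
        fmt.Println(cmd.String())
    }
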
	I0805 16:21:39.647493    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:21:39.650407    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Pid is 4678
	I0805 16:21:39.650823    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 0
	I0805 16:21:39.650838    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:39.650909    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:39.651807    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:39.651870    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:39.651899    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:39.651984    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:39.652006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:39.652022    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:39.652032    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:39.652039    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:39.652046    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:39.652082    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:39.652100    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:39.652113    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:39.652123    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:39.652143    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
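
The driver discovers the new VM's IP by re-reading /var/db/dhcpd_leases every 2 seconds until the MAC it generated shows up (found on attempt 5 below). The log prints parsed entries; the raw macOS lease file is, by observation, one {...} stanza per lease with key=value lines such as ip_address=... and hw_address=1,<mac>. A sketch parser under that assumption:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // ipForMAC scans the dhcpd_leases file for a lease whose hw_address
    // matches the given MAC and returns the ip_address from the same stanza.
    func ipForMAC(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                hw := strings.TrimPrefix(line, "hw_address=")
                hw = strings.TrimPrefix(hw, "1,") // strip the hardware-type prefix
                if hw == mac {
                    return ip, nil
                }
            }
        }
        return "", fmt.Errorf("no lease for %s", mac)
    }

    func main() {
        ip, err := ipForMAC("/var/db/dhcpd_leases", "a6:1c:88:9c:44:3")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("IP:", ip)
    }
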
	I0805 16:21:39.657903    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:21:39.666018    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:21:39.666937    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:21:39.666963    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:21:39.666975    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:21:39.666990    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:21:40.050205    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:21:40.050221    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:21:40.165006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:21:40.165028    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:21:40.165042    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:21:40.165049    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:21:40.165899    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:21:40.165911    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:21:41.653048    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 1
	I0805 16:21:41.653066    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:41.653144    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:41.653911    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:41.653968    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:41.653979    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:41.653992    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:41.653998    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:41.654006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:41.654015    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:41.654030    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:41.654045    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:41.654053    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:41.654061    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:41.654070    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:41.654078    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:41.654093    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:43.655366    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 2
	I0805 16:21:43.655382    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:43.655471    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:43.656243    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:43.656291    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:43.656301    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:43.656319    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:43.656329    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:43.656351    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:43.656362    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:43.656369    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:43.656375    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:43.656391    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:43.656406    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:43.656416    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:43.656423    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:43.656437    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:45.657345    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 3
	I0805 16:21:45.657361    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:45.657459    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:45.658214    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:45.658269    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:45.658278    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:45.658286    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:45.658295    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:45.658310    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:45.658321    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:45.658329    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:45.658337    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:45.658349    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:45.658362    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:45.658370    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:45.658378    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:45.658387    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:45.751756    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:21:45.751812    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:21:45.751830    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:21:45.774801    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:21:47.659182    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 4
	I0805 16:21:47.659208    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:47.659291    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:47.660062    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:47.660112    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:47.660128    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:47.660137    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:47.660145    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:47.660153    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:47.660162    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:47.660178    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:47.660192    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:47.660204    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:47.660218    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:47.660230    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:47.660240    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:47.660260    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:49.662115    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 5
	I0805 16:21:49.662148    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:49.662310    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:49.663748    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:49.663812    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0805 16:21:49.663831    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b00c}
	I0805 16:21:49.663846    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found match: a6:1c:88:9c:44:3
	I0805 16:21:49.663856    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | IP: 192.169.0.14
	I0805 16:21:49.663945    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:49.664855    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:49.665006    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:49.665127    4640 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 16:21:49.665139    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:21:49.665271    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:49.665344    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:49.666326    4640 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 16:21:49.666337    4640 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 16:21:49.666342    4640 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 16:21:49.666348    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.666471    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.666603    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.666743    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.666869    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.667045    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.667279    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.667287    4640 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 16:21:49.724369    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
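
WaitForSSH above simply retries running `exit 0` over SSH until the guest's sshd answers. A sketch of that loop with golang.org/x/crypto/ssh, using the generated id_rsa key (paths, address and timeout are placeholders):

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/path/to/machines/multinode-985000-m02/id_rsa") // placeholder
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a freshly created local VM
            Timeout:         5 * time.Second,
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            client, err := ssh.Dial("tcp", "192.169.0.14:22", cfg)
            if err == nil {
                sess, err := client.NewSession()
                if err == nil && sess.Run("exit 0") == nil {
                    client.Close()
                    fmt.Println("SSH is available")
                    return
                }
                client.Close()
            }
            time.Sleep(2 * time.Second) // retry until sshd comes up
        }
        fmt.Println("timed out waiting for SSH")
    }
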
	I0805 16:21:49.724382    4640 main.go:141] libmachine: Detecting the provisioner...
	I0805 16:21:49.724388    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.724522    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.724626    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.724719    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.724810    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.724938    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.725087    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.725094    4640 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 16:21:49.782403    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 16:21:49.782454    4640 main.go:141] libmachine: found compatible host: buildroot
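
Provisioner detection is just `cat /etc/os-release` plus matching the ID field; ID=buildroot here selects the buildroot provisioner. A sketch of the parse (standard library only):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // parseOSRelease turns /etc/os-release-style KEY=value lines into a map,
    // trimming optional quotes (e.g. PRETTY_NAME="Buildroot 2023.02.9").
    func parseOSRelease(data string) map[string]string {
        m := make(map[string]string)
        for _, line := range strings.Split(data, "\n") {
            k, v, ok := strings.Cut(line, "=")
            if !ok {
                continue
            }
            m[k] = strings.Trim(v, `"`)
        }
        return m
    }

    func main() {
        data, err := os.ReadFile("/etc/os-release")
        if err != nil {
            panic(err)
        }
        info := parseOSRelease(string(data))
        if info["ID"] == "buildroot" {
            fmt.Println("found compatible host:", info["PRETTY_NAME"])
        }
    }
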
	I0805 16:21:49.782460    4640 main.go:141] libmachine: Provisioning with buildroot...
	I0805 16:21:49.782466    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.782595    4640 buildroot.go:166] provisioning hostname "multinode-985000-m02"
	I0805 16:21:49.782606    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.782698    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.782797    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.782871    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.782964    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.783079    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.783204    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.783350    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.783359    4640 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000-m02 && echo "multinode-985000-m02" | sudo tee /etc/hostname
	I0805 16:21:49.854175    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000-m02
	
	I0805 16:21:49.854190    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.854319    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.854421    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.854492    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.854587    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.854712    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.854870    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.854882    4640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:21:49.917814    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
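
The two commands above are the standard libmachine hostname fixup: set the kernel hostname and persist it to /etc/hostname, then make sure 127.0.1.1 maps to the new name in /etc/hosts, rewriting an existing 127.0.1.1 entry in place or appending one (the grep -xq guards keep the edit idempotent). A sketch that renders the same commands for an arbitrary hostname; the helper name is ours:

    package main

    import "fmt"

    // hostnameCmds is a hypothetical helper reproducing the two SSH commands
    // in the log for a given hostname.
    func hostnameCmds(name string) []string {
        return []string{
            fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname`, name),
            fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name),
        }
    }

    func main() {
        for _, c := range hostnameCmds("multinode-985000-m02") {
            fmt.Println(c)
        }
    }
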
	I0805 16:21:49.917830    4640 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:21:49.917840    4640 buildroot.go:174] setting up certificates
	I0805 16:21:49.917846    4640 provision.go:84] configureAuth start
	I0805 16:21:49.917856    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.917985    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:49.918095    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.918192    4640 provision.go:143] copyHostCerts
	I0805 16:21:49.918223    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:21:49.918280    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:21:49.918285    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:21:49.918411    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:21:49.918617    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:21:49.918652    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:21:49.918658    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:21:49.918733    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:21:49.918888    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:21:49.918922    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:21:49.918927    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:21:49.918994    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:21:49.919145    4640 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-985000-m02]
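
The server cert above is minted from the local CA with the SAN list [127.0.0.1 192.169.0.14 localhost minikube multinode-985000-m02], so the Docker TLS port answers under any of those names. A compressed sketch with crypto/x509; for brevity it self-signs rather than chaining to ca.pem/ca-key.pem as minikube does:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-985000-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN set from the provision.go line above.
            DNSNames:    []string{"localhost", "minikube", "multinode-985000-m02"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.14")},
        }
        // Self-signed for the sketch: template doubles as the issuer.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
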
	I0805 16:21:50.072896    4640 provision.go:177] copyRemoteCerts
	I0805 16:21:50.072947    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:21:50.072962    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.073107    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.073199    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.073317    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.073426    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:50.108446    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:21:50.108519    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:21:50.128617    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:21:50.128684    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0805 16:21:50.148653    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:21:50.148720    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:21:50.168682    4640 provision.go:87] duration metric: took 250.828344ms to configureAuth
	I0805 16:21:50.168695    4640 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:21:50.168835    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:50.168849    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:50.168993    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.169087    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.169175    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.169262    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.169345    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.169486    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.169621    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.169628    4640 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:21:50.228062    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:21:50.228074    4640 buildroot.go:70] root file system type: tmpfs
	I0805 16:21:50.228150    4640 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:21:50.228164    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.228293    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.228388    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.228480    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.228586    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.228755    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.228888    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.228934    4640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:21:50.296901    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:21:50.296919    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.297064    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.297158    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.297250    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.297333    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.297475    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.297611    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.297624    4640 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:21:51.873922    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:21:51.873940    4640 main.go:141] libmachine: Checking connection to Docker...
	I0805 16:21:51.873964    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetURL
	I0805 16:21:51.874107    4640 main.go:141] libmachine: Docker is up and running!
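
The `diff -u old new || { mv ...; daemon-reload; enable; restart; }` command above only rewrites and restarts docker when the rendered unit actually differs; on this fresh node diff fails because docker.service does not exist yet, so the new unit is installed and enabled (hence the "Created symlink" line). The same write-only-if-different idea, sketched locally in Go:

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // writeIfChanged installs content at path only when it differs from what
    // is already there, and reports whether a service restart would be needed.
    func writeIfChanged(path string, content []byte) (changed bool, err error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, content) {
            return false, nil // identical: nothing to do, no restart
        }
        // Missing file or different content: (re)write it.
        if err := os.WriteFile(path, content, 0o644); err != nil {
            return false, err
        }
        return true, nil
    }

    func main() {
        changed, err := writeIfChanged("/tmp/docker.service", []byte("[Unit]\n...\n"))
        if err != nil {
            panic(err)
        }
        if changed {
            fmt.Println("unit updated; would run daemon-reload && restart docker")
        }
    }
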
	I0805 16:21:51.874115    4640 main.go:141] libmachine: Reticulating splines...
	I0805 16:21:51.874120    4640 client.go:171] duration metric: took 12.916447572s to LocalClient.Create
	I0805 16:21:51.874129    4640 start.go:167] duration metric: took 12.916485141s to libmachine.API.Create "multinode-985000"
	I0805 16:21:51.874135    4640 start.go:293] postStartSetup for "multinode-985000-m02" (driver="hyperkit")
	I0805 16:21:51.874142    4640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:21:51.874152    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:51.874292    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:21:51.874313    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:51.874416    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:51.874505    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.874583    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:51.874657    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:51.915394    4640 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:21:51.919538    4640 command_runner.go:130] > NAME=Buildroot
	I0805 16:21:51.919549    4640 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:21:51.919553    4640 command_runner.go:130] > ID=buildroot
	I0805 16:21:51.919557    4640 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:21:51.919560    4640 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:21:51.919635    4640 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:21:51.919645    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:21:51.919746    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:21:51.919897    4640 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:21:51.919903    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:21:51.920070    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:21:51.929531    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:21:51.959146    4640 start.go:296] duration metric: took 85.003807ms for postStartSetup
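	postStartSetup mirrors everything under the local `.minikube/files` asset tree into the guest filesystem root, which is why files/etc/ssl/certs/16782.pem lands at /etc/ssl/certs/16782.pem. A sketch of that path mapping (printing instead of copying; the root directory is taken from the log):
	
	package main
	
	import (
		"fmt"
		"io/fs"
		"path/filepath"
	)
	
	func main() {
		root := "/Users/jenkins/minikube-integration/19373-1122/.minikube/files"
		filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			rel, _ := filepath.Rel(root, p)
			// e.g. .../files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
			fmt.Printf("%s -> /%s\n", p, rel)
			return nil
		})
	}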
	I0805 16:21:51.959174    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:51.959830    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:51.959996    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:51.960355    4640 start.go:128] duration metric: took 13.03589336s to createHost
	I0805 16:21:51.960370    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:51.960461    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:51.960532    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.960607    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.960679    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:51.960792    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:51.960921    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:51.960928    4640 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 16:21:52.018527    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722900112.019707412
	
	I0805 16:21:52.018539    4640 fix.go:216] guest clock: 1722900112.019707412
	I0805 16:21:52.018544    4640 fix.go:229] Guest: 2024-08-05 16:21:52.019707412 -0700 PDT Remote: 2024-08-05 16:21:51.960363 -0700 PDT m=+79.692294773 (delta=59.344412ms)
	I0805 16:21:52.018555    4640 fix.go:200] guest clock delta is within tolerance: 59.344412ms
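	The odd `date +%!s(MISSING).%!N(MISSING)` above is the intended command `date +%s.%N` mangled by Go's fmt package, which substitutes %!s(MISSING) when a format verb is given no argument. The guest replies with seconds.nanoseconds, and the delta against the host clock is checked against a skew tolerance. A sketch of the parse-and-compare step (the one-second tolerance is an assumption, not minikube's actual threshold):
	
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	func main() {
		guest := "1722900112.019707412" // sample `date +%s.%N` output from the log
		parts := strings.SplitN(guest, ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		nsec, _ := strconv.ParseInt(parts[1], 10, 64)
		delta := time.Since(time.Unix(sec, nsec))
		const tolerance = time.Second // hypothetical threshold
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta.Abs() < tolerance)
	}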
	I0805 16:21:52.018561    4640 start.go:83] releasing machines lock for "multinode-985000-m02", held for 13.094193048s
	I0805 16:21:52.018577    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.018703    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:52.040117    4640 out.go:177] * Found network options:
	I0805 16:21:52.084887    4640 out.go:177]   - NO_PROXY=192.169.0.13
	W0805 16:21:52.106885    4640 proxy.go:119] fail to check proxy env: Error ip not in block
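	This warning (repeated a few lines below) is benign: NO_PROXY holds a bare IP, 192.169.0.13, and the block-membership check cannot parse it as a CIDR, hence "ip not in block". An illustration of that failure mode with the standard library (this is not minikube's actual proxy.go code):
	
	package main
	
	import (
		"fmt"
		"net"
	)
	
	func inNoProxyBlock(entry, ip string) (bool, error) {
		_, block, err := net.ParseCIDR(entry) // fails for a bare IP such as "192.169.0.13"
		if err != nil {
			return false, err
		}
		return block.Contains(net.ParseIP(ip)), nil
	}
	
	func main() {
		fmt.Println(inNoProxyBlock("192.169.0.13", "192.169.0.14"))   // false, invalid CIDR address
		fmt.Println(inNoProxyBlock("192.169.0.0/24", "192.169.0.14")) // true, <nil>
	}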
	I0805 16:21:52.106945    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.107811    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.108153    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.108320    4640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:21:52.108371    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	W0805 16:21:52.108412    4640 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:21:52.108519    4640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:21:52.108545    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:52.108628    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:52.108772    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:52.108842    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:52.108951    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:52.109026    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:52.109176    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:52.109197    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:52.109323    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:52.141829    4640 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:21:52.141939    4640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:21:52.141993    4640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:21:52.191903    4640 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:21:52.192466    4640 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:21:52.192507    4640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
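	Any pre-installed bridge or podman CNI config is disabled by renaming it with a .mk_disabled suffix (the `%!p(MISSING)` in the find command is again a mangled `%p` printf verb). The same step as a standalone sketch:
	
	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)
	
	func main() {
		for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pat)
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled
				}
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, err)
				}
			}
		}
	}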
	I0805 16:21:52.192514    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:21:52.192581    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:21:52.208225    4640 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:21:52.208528    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:21:52.217078    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:21:52.225489    4640 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:21:52.225534    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:21:52.233992    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:21:52.242465    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:21:52.250835    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:21:52.260065    4640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:21:52.268863    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:21:52.277242    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:21:52.285501    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
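	The run of sed edits above rewrites /etc/containerd/config.toml for the chosen "cgroupfs" driver: pin the pause image, force SystemdCgroup = false, migrate the legacy runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. The SystemdCgroup toggle expressed as the equivalent Go regexp rewrite:
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	func main() {
		conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  SystemdCgroup = true`
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	}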
	I0805 16:21:52.293845    4640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:21:52.301185    4640 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:21:52.301319    4640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:21:52.308881    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:52.403323    4640 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:21:52.423722    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:21:52.423794    4640 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:21:52.442557    4640 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:21:52.443108    4640 command_runner.go:130] > [Unit]
	I0805 16:21:52.443119    4640 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:21:52.443124    4640 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:21:52.443128    4640 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:21:52.443132    4640 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:21:52.443136    4640 command_runner.go:130] > StartLimitBurst=3
	I0805 16:21:52.443141    4640 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:21:52.443147    4640 command_runner.go:130] > [Service]
	I0805 16:21:52.443151    4640 command_runner.go:130] > Type=notify
	I0805 16:21:52.443155    4640 command_runner.go:130] > Restart=on-failure
	I0805 16:21:52.443160    4640 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0805 16:21:52.443165    4640 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:21:52.443175    4640 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:21:52.443182    4640 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:21:52.443188    4640 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:21:52.443194    4640 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:21:52.443200    4640 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:21:52.443212    4640 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:21:52.443224    4640 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:21:52.443231    4640 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:21:52.443234    4640 command_runner.go:130] > ExecStart=
	I0805 16:21:52.443246    4640 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:21:52.443250    4640 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:21:52.443256    4640 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:21:52.443262    4640 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:21:52.443265    4640 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:21:52.443269    4640 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:21:52.443272    4640 command_runner.go:130] > LimitCORE=infinity
	I0805 16:21:52.443277    4640 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:21:52.443282    4640 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:21:52.443285    4640 command_runner.go:130] > TasksMax=infinity
	I0805 16:21:52.443290    4640 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:21:52.443296    4640 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:21:52.443299    4640 command_runner.go:130] > Delegate=yes
	I0805 16:21:52.443304    4640 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:21:52.443313    4640 command_runner.go:130] > KillMode=process
	I0805 16:21:52.443317    4640 command_runner.go:130] > [Install]
	I0805 16:21:52.443321    4640 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:21:52.443454    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:21:52.455112    4640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:21:52.472976    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:21:52.485648    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:21:52.496640    4640 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:21:52.520742    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:21:52.532843    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:21:52.547391    4640 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
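	With Docker selected, the competing runtimes are stopped and crictl is repointed at cri-dockerd's socket. Each `systemctl is-active --quiet` Run above is an exit-code probe: 0 means the unit is active, so a nil exec error means "still running". A simplified sketch of that loop:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func active(unit string) bool {
		// `systemctl is-active --quiet` exits 0 only when the unit is active.
		return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
	}
	
	func main() {
		for _, unit := range []string{"containerd", "crio"} {
			if active(unit) {
				fmt.Println("stopping", unit)
				_ = exec.Command("systemctl", "stop", "-f", unit).Run()
			}
		}
	}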
	I0805 16:21:52.547619    4640 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:21:52.550475    4640 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:21:52.550551    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:21:52.558821    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:21:52.572801    4640 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:21:52.669948    4640 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:21:52.772017    4640 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:21:52.772038    4640 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
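	docker.go reports configuring the "cgroupfs" cgroup driver and ships a 130-byte /etc/docker/daemon.json; the payload itself is not echoed in this log, so the following is only an assumed minimal equivalent built on Docker's documented exec-opts setting:
	
	package main
	
	import (
		"encoding/json"
		"fmt"
	)
	
	func main() {
		cfg := map[string]any{
			// selects the cgroupfs driver; assumed content, not minikube's exact file
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		b, _ := json.MarshalIndent(cfg, "", "  ")
		fmt.Println(string(b)) // would be written to /etc/docker/daemon.json
	}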
	I0805 16:21:52.785587    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:52.887001    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:22:53.782764    4640 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0805 16:22:53.782779    4640 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0805 16:22:53.782788    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.895755367s)
	I0805 16:22:53.782849    4640 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0805 16:22:53.817223    4640 out.go:177] 
	W0805 16:22:53.838182    4640 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 05 23:21:50 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578059613Z" level=info msg="Starting up"
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578746899Z" level=info msg="containerd not running, starting managed containerd"
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.579364099Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.597194743Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613422882Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613448264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613527396Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613540484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613598776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613664323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613844698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613881896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613894727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613902000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614005875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614259691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615867073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615974584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616138996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616172823Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616291383Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616398312Z" level=info msg="metadata content store policy set" policy=shared
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.618998610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619065338Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619081703Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619092273Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619101426Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619164798Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619370752Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619460644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619495461Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619506581Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619515758Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619524383Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619532546Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619541391Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619550990Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619565508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619576616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619584035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619598072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619608190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619616319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619625389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619634123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619648148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619658942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619667668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619676302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619686416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619694011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619701566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619709342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619719250Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619733203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619741785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619749153Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619797467Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619811479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619819137Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619826861Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619833500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619841896Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619852419Z" level=info msg="NRI interface is disabled by configuration."
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620071162Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620124755Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620155079Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620168148Z" level=info msg="containerd successfully booted in 0.023750s"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.639692405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.644102102Z" level=info msg="Loading containers: start."
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.740540264Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.826229634Z" level=info msg="Loading containers: done."
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843276878Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843375843Z" level=info msg="Daemon has completed initialization"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869275976Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869434474Z" level=info msg="API listen on [::]:2376"
	Aug 05 23:21:51 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.919662359Z" level=info msg="Processing signal 'terminated'"
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920773928Z" level=info msg="Daemon shutdown complete"
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920792538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920845272Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920858866Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 05 23:21:52 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:21:53 multinode-985000-m02 dockerd[923]: time="2024-08-05T23:21:53.957339969Z" level=info msg="Starting up"
	Aug 05 23:22:53 multinode-985000-m02 dockerd[923]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
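	The journal pins down the failure: the first dockerd (pid 514) came up cleanly, received the terminate signal at 23:21:52 when minikube restarted the service after the daemon.json change, and the restarted dockerd (pid 923) then spent the full sixty seconds failing to dial /run/containerd/containerd.sock before systemd marked docker.service failed. That timeout is exactly the RUNTIME_ENABLE error surfaced above. The same socket probe as a small Go check, handy when reproducing the hang inside the guest:
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// dockerd gave up on this dial with "context deadline exceeded".
		conn, err := net.DialTimeout("unix", "/run/containerd/containerd.sock", time.Minute)
		if err != nil {
			fmt.Println("dial failed:", err)
			return
		}
		conn.Close()
		fmt.Println("containerd socket is reachable")
	}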
	W0805 16:22:53.838301    4640 out.go:239] * 
	W0805 16:22:53.839537    4640 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:22:53.901092    4640 out.go:177] 
	
	
	==> Docker <==
	Aug 05 23:21:20 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:20.870913472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:21 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:21:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/65a1122097f0725228802a52e0fbb10f5c959d1073c96dd779e5d0d5bf1a190d/resolv.conf as [nameserver 192.169.0.1]"
	Aug 05 23:21:24 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:21:24Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20240730-75a5af0c: Status: Downloaded newer image for kindest/kindnetd:v20240730-75a5af0c"
	Aug 05 23:21:24 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:24.316893544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:24 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:24.316953620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:24 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:24.316967191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:24 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:24.317067894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.538104203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.538165633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.538177572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.538240622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.545949341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546006859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546094356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546213245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:21:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2a8cd74365e92f179bb6ee1ce28c9364c192d2bf64c54e8b18c5339cfbdf5dcd/resolv.conf as [nameserver 192.169.0.1]"
	Aug 05 23:21:36 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:21:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/35b9ac42edc06af57c697463456d60a00f8d9d12849ef967af1e639bc238e3b3/resolv.conf as [nameserver 192.169.0.1]"
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.715025205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.715620680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.716022138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.717088853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755323726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755409641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755418837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.764703174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c9365aec33892       cbb01a7bd410d                                                                              About a minute ago   Running             coredns                   0                   35b9ac42edc06       coredns-7db6d8ff4d-fqtll
	3d9fd612d0b14       6e38f40d628db                                                                              About a minute ago   Running             storage-provisioner       0                   2a8cd74365e92       storage-provisioner
	724e5cfab0a27       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3   About a minute ago   Running             kindnet-cni               0                   65a1122097f07       kindnet-tvtvg
	d58ca48f9f8b2       55bb025d2cfa5                                                                              About a minute ago   Running             kube-proxy                0                   c91338eb0e138       kube-proxy-fwgw7
	792feba1a6f6b       3edc18e7b7672                                                                              About a minute ago   Running             kube-scheduler            0                   c86e04eb7823b       kube-scheduler-multinode-985000
	1fdd85b796ab3       3861cfcd7c04c                                                                              About a minute ago   Running             etcd                      0                   b58900db52990       etcd-multinode-985000
	d11865076c645       76932a3b37d7e                                                                              About a minute ago   Running             kube-controller-manager   0                   55a20063845e3       kube-controller-manager-multinode-985000
	608878b33f358       1f6d574d502f3                                                                              About a minute ago   Running             kube-apiserver            0                   569788c2699f1       kube-apiserver-multinode-985000
	
	
	==> coredns [c9365aec3389] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57821 - 19682 "HINFO IN 7732396596932693360.4385804994640298901. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014623104s
	
	
	==> describe nodes <==
	Name:               multinode-985000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-985000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=multinode-985000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T16_21_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:21:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-985000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:22:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:21:36 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:21:36 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:21:36 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:21:36 +0000   Mon, 05 Aug 2024 23:21:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-985000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 43d0d80c8ac846e58ac4351481e2a76f
	  System UUID:                3ac6443b-0000-0000-898d-9b152fa64288
	  Boot ID:                    382df761-aca3-4a9d-bdce-655bf0444398
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-fqtll                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     95s
	  kube-system                 etcd-multinode-985000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         109s
	  kube-system                 kindnet-tvtvg                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      95s
	  kube-system                 kube-apiserver-multinode-985000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-multinode-985000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-fwgw7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-scheduler-multinode-985000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 94s                  kube-proxy       
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s (x8 over 114s)  kubelet          Node multinode-985000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s (x8 over 114s)  kubelet          Node multinode-985000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s (x7 over 114s)  kubelet          Node multinode-985000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  114s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  109s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  109s                 kubelet          Node multinode-985000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s                 kubelet          Node multinode-985000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s                 kubelet          Node multinode-985000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           96s                  node-controller  Node multinode-985000 event: Registered Node multinode-985000 in Controller
	  Normal  NodeReady                80s                  kubelet          Node multinode-985000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.662271] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.261909] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.788416] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
	[  +0.099076] systemd-fstab-generator[502]: Ignoring "noauto" option for root device
	[  +1.730104] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +0.293514] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.050985] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.056812] systemd-fstab-generator[892]: Ignoring "noauto" option for root device
	[  +0.126132] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[  +2.458612] systemd-fstab-generator[1120]: Ignoring "noauto" option for root device
	[  +0.104830] systemd-fstab-generator[1132]: Ignoring "noauto" option for root device
	[  +0.110549] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +0.128910] systemd-fstab-generator[1159]: Ignoring "noauto" option for root device
	[  +3.841948] systemd-fstab-generator[1259]: Ignoring "noauto" option for root device
	[  +0.049995] kauditd_printk_skb: 180 callbacks suppressed
	[  +2.575866] systemd-fstab-generator[1508]: Ignoring "noauto" option for root device
	[  +3.513702] systemd-fstab-generator[1689]: Ignoring "noauto" option for root device
	[  +0.052965] kauditd_printk_skb: 70 callbacks suppressed
	[Aug 5 23:21] systemd-fstab-generator[2095]: Ignoring "noauto" option for root device
	[  +0.093506] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.997559] systemd-fstab-generator[2287]: Ignoring "noauto" option for root device
	[  +0.103967] kauditd_printk_skb: 12 callbacks suppressed
	[ +16.210215] kauditd_printk_skb: 60 callbacks suppressed
	
	
	==> etcd [1fdd85b796ab] <==
	{"level":"info","ts":"2024-08-05T23:21:02.18706Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-08-05T23:21:02.178764Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"e0290fa3161c5471","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-08-05T23:21:02.178866Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:21:02.190598Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:21:02.190621Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:21:02.179152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 switched to configuration voters=(16152458731666035825)"}
	{"level":"info","ts":"2024-08-05T23:21:02.190761Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2024-08-05T23:21:02.845352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.84543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.845462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.845512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.849595Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.851787Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-985000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T23:21:02.852037Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:21:02.855611Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-08-05T23:21:02.856003Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.856059Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.85615Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.863221Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:21:02.86336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T23:21:02.863406Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T23:21:02.864495Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 23:22:55 up 2 min,  0 users,  load average: 0.34, 0.25, 0.10
	Linux multinode-985000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [724e5cfab0a2] <==
	I0805 23:21:24.588626       1 main.go:148] setting mtu 1500 for CNI 
	I0805 23:21:24.588675       1 main.go:178] kindnetd IP family: "ipv4"
	I0805 23:21:24.588699       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0805 23:21:24.988087       1 main.go:237] Error creating network policy controller: could not run nftables command: /dev/stdin:1:1-37: Error: Could not process rule: Operation not supported
	add table inet kube-network-policies
	^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	, skipping network policies
	I0805 23:21:34.994707       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:21:34.994750       1 main.go:299] handling current node
	I0805 23:21:44.994289       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:21:44.994329       1 main.go:299] handling current node
	I0805 23:21:54.989547       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:21:54.989683       1 main.go:299] handling current node
	I0805 23:22:04.996172       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:22:04.996234       1 main.go:299] handling current node
	I0805 23:22:14.993544       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:22:14.993818       1 main.go:299] handling current node
	I0805 23:22:24.988919       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:22:24.989032       1 main.go:299] handling current node
	I0805 23:22:34.997314       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:22:34.997450       1 main.go:299] handling current node
	I0805 23:22:44.997367       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:22:44.997495       1 main.go:299] handling current node
	I0805 23:22:54.988709       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:22:54.988749       1 main.go:299] handling current node
	
	
	==> kube-apiserver [608878b33f35] <==
	I0805 23:21:04.062849       1 controller.go:615] quota admission added evaluator for: namespaces
	I0805 23:21:04.063684       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 23:21:04.063770       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0805 23:21:04.064440       1 shared_informer.go:320] Caches are synced for configmaps
	I0805 23:21:04.096991       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0805 23:21:04.097032       1 aggregator.go:165] initial CRD sync complete...
	I0805 23:21:04.097038       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 23:21:04.097041       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 23:21:04.097046       1 cache.go:39] Caches are synced for autoregister controller
	I0805 23:21:04.110976       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 23:21:04.964782       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0805 23:21:04.969492       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0805 23:21:04.969592       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0805 23:21:05.293407       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 23:21:05.318630       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 23:21:05.372930       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0805 23:21:05.377089       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I0805 23:21:05.377814       1 controller.go:615] quota admission added evaluator for: endpoints
	I0805 23:21:05.381896       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 23:21:06.014220       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0805 23:21:06.529594       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 23:21:06.534785       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0805 23:21:06.541889       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 23:21:20.069451       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0805 23:21:20.168118       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [d11865076c64] <==
	I0805 23:21:19.416726       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0805 23:21:19.418953       1 shared_informer.go:320] Caches are synced for PVC protection
	I0805 23:21:19.419460       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0805 23:21:19.419464       1 shared_informer.go:320] Caches are synced for job
	I0805 23:21:19.421825       1 shared_informer.go:320] Caches are synced for ephemeral
	I0805 23:21:19.424712       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0805 23:21:19.437276       1 shared_informer.go:320] Caches are synced for HPA
	I0805 23:21:19.471485       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 23:21:19.493007       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 23:21:19.891021       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 23:21:19.917468       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 23:21:19.917792       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0805 23:21:20.414332       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="341.696199ms"
	I0805 23:21:20.435171       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.789887ms"
	I0805 23:21:20.453666       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.448745ms"
	I0805 23:21:20.454853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="1.144243ms"
	I0805 23:21:20.787054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.481389ms"
	I0805 23:21:20.817469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.368774ms"
	I0805 23:21:20.817550       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.975µs"
	I0805 23:21:35.878200       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.077µs"
	I0805 23:21:35.888778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.967µs"
	I0805 23:21:37.680305       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.353µs"
	I0805 23:21:37.699191       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="7.51419ms"
	I0805 23:21:37.699276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.856µs"
	I0805 23:21:39.419986       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d58ca48f9f8b] <==
	I0805 23:21:21.029929       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:21:21.072929       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0805 23:21:21.105532       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:21:21.105552       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:21:21.105563       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:21:21.107493       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:21:21.107594       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:21:21.107602       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:21:21.108477       1 config.go:192] "Starting service config controller"
	I0805 23:21:21.108482       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:21:21.108492       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:21:21.108494       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:21:21.108784       1 config.go:319] "Starting node config controller"
	I0805 23:21:21.108789       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:21:21.209420       1 shared_informer.go:320] Caches are synced for node config
	I0805 23:21:21.209474       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:21:21.209501       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [792feba1a6f6] <==
	E0805 23:21:04.024310       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:04.024229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 23:21:04.024017       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:21:04.024329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:21:04.024047       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:21:04.024362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:21:04.024118       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:21:04.024431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0805 23:21:04.860871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:04.861069       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:04.959895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 23:21:04.959949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 23:21:04.962444       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:21:04.962496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:21:04.968410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:21:04.968452       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:21:05.030527       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 23:21:05.030566       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 23:21:05.076451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:05.076659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:05.118159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:05.118676       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:05.141945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:21:05.142020       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0805 23:21:08.218627       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 23:21:20 multinode-985000 kubelet[2102]: I0805 23:21:20.367574    2102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7dd4afe7-2a17-4298-823b-9955e43cfdb2-xtables-lock\") pod \"kindnet-tvtvg\" (UID: \"7dd4afe7-2a17-4298-823b-9955e43cfdb2\") " pod="kube-system/kindnet-tvtvg"
	Aug 05 23:21:20 multinode-985000 kubelet[2102]: I0805 23:21:20.367687    2102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3fb72e39-699d-4123-ae5e-e314a191d904-kube-proxy\") pod \"kube-proxy-fwgw7\" (UID: \"3fb72e39-699d-4123-ae5e-e314a191d904\") " pod="kube-system/kube-proxy-fwgw7"
	Aug 05 23:21:20 multinode-985000 kubelet[2102]: I0805 23:21:20.367762    2102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3fb72e39-699d-4123-ae5e-e314a191d904-xtables-lock\") pod \"kube-proxy-fwgw7\" (UID: \"3fb72e39-699d-4123-ae5e-e314a191d904\") " pod="kube-system/kube-proxy-fwgw7"
	Aug 05 23:21:20 multinode-985000 kubelet[2102]: I0805 23:21:20.367831    2102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjxwx\" (UniqueName: \"kubernetes.io/projected/3fb72e39-699d-4123-ae5e-e314a191d904-kube-api-access-wjxwx\") pod \"kube-proxy-fwgw7\" (UID: \"3fb72e39-699d-4123-ae5e-e314a191d904\") " pod="kube-system/kube-proxy-fwgw7"
	Aug 05 23:21:20 multinode-985000 kubelet[2102]: I0805 23:21:20.367902    2102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hpmc\" (UniqueName: \"kubernetes.io/projected/7dd4afe7-2a17-4298-823b-9955e43cfdb2-kube-api-access-4hpmc\") pod \"kindnet-tvtvg\" (UID: \"7dd4afe7-2a17-4298-823b-9955e43cfdb2\") " pod="kube-system/kindnet-tvtvg"
	Aug 05 23:21:20 multinode-985000 kubelet[2102]: I0805 23:21:20.367949    2102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3fb72e39-699d-4123-ae5e-e314a191d904-lib-modules\") pod \"kube-proxy-fwgw7\" (UID: \"3fb72e39-699d-4123-ae5e-e314a191d904\") " pod="kube-system/kube-proxy-fwgw7"
	Aug 05 23:21:20 multinode-985000 kubelet[2102]: I0805 23:21:20.367989    2102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7dd4afe7-2a17-4298-823b-9955e43cfdb2-lib-modules\") pod \"kindnet-tvtvg\" (UID: \"7dd4afe7-2a17-4298-823b-9955e43cfdb2\") " pod="kube-system/kindnet-tvtvg"
	Aug 05 23:21:20 multinode-985000 kubelet[2102]: I0805 23:21:20.368026    2102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/7dd4afe7-2a17-4298-823b-9955e43cfdb2-cni-cfg\") pod \"kindnet-tvtvg\" (UID: \"7dd4afe7-2a17-4298-823b-9955e43cfdb2\") " pod="kube-system/kindnet-tvtvg"
	Aug 05 23:21:21 multinode-985000 kubelet[2102]: I0805 23:21:21.533757    2102 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fwgw7" podStartSLOduration=1.533746174 podStartE2EDuration="1.533746174s" podCreationTimestamp="2024-08-05 23:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 23:21:21.533613047 +0000 UTC m=+15.232313113" watchObservedRunningTime="2024-08-05 23:21:21.533746174 +0000 UTC m=+15.232446234"
	Aug 05 23:21:24 multinode-985000 kubelet[2102]: I0805 23:21:24.554679    2102 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tvtvg" podStartSLOduration=1.357613844 podStartE2EDuration="4.55466524s" podCreationTimestamp="2024-08-05 23:21:20 +0000 UTC" firstStartedPulling="2024-08-05 23:21:21.067788948 +0000 UTC m=+14.766489004" lastFinishedPulling="2024-08-05 23:21:24.264840342 +0000 UTC m=+17.963540400" observedRunningTime="2024-08-05 23:21:24.554540499 +0000 UTC m=+18.253240560" watchObservedRunningTime="2024-08-05 23:21:24.55466524 +0000 UTC m=+18.253365302"
	Aug 05 23:21:35 multinode-985000 kubelet[2102]: I0805 23:21:35.861427    2102 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Aug 05 23:21:35 multinode-985000 kubelet[2102]: I0805 23:21:35.877555    2102 topology_manager.go:215] "Topology Admit Handler" podUID="4d8af129-475b-4185-8b0d-cbda67812964" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fqtll"
	Aug 05 23:21:35 multinode-985000 kubelet[2102]: I0805 23:21:35.881930    2102 topology_manager.go:215] "Topology Admit Handler" podUID="72ec8458-5c62-43eb-9120-0146e6ccaf8f" podNamespace="kube-system" podName="storage-provisioner"
	Aug 05 23:21:36 multinode-985000 kubelet[2102]: I0805 23:21:36.075346    2102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr28z\" (UniqueName: \"kubernetes.io/projected/4d8af129-475b-4185-8b0d-cbda67812964-kube-api-access-vr28z\") pod \"coredns-7db6d8ff4d-fqtll\" (UID: \"4d8af129-475b-4185-8b0d-cbda67812964\") " pod="kube-system/coredns-7db6d8ff4d-fqtll"
	Aug 05 23:21:36 multinode-985000 kubelet[2102]: I0805 23:21:36.075511    2102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d8af129-475b-4185-8b0d-cbda67812964-config-volume\") pod \"coredns-7db6d8ff4d-fqtll\" (UID: \"4d8af129-475b-4185-8b0d-cbda67812964\") " pod="kube-system/coredns-7db6d8ff4d-fqtll"
	Aug 05 23:21:36 multinode-985000 kubelet[2102]: I0805 23:21:36.075647    2102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/72ec8458-5c62-43eb-9120-0146e6ccaf8f-tmp\") pod \"storage-provisioner\" (UID: \"72ec8458-5c62-43eb-9120-0146e6ccaf8f\") " pod="kube-system/storage-provisioner"
	Aug 05 23:21:36 multinode-985000 kubelet[2102]: I0805 23:21:36.075764    2102 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5twhm\" (UniqueName: \"kubernetes.io/projected/72ec8458-5c62-43eb-9120-0146e6ccaf8f-kube-api-access-5twhm\") pod \"storage-provisioner\" (UID: \"72ec8458-5c62-43eb-9120-0146e6ccaf8f\") " pod="kube-system/storage-provisioner"
	Aug 05 23:21:36 multinode-985000 kubelet[2102]: I0805 23:21:36.636822    2102 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a8cd74365e92f179bb6ee1ce28c9364c192d2bf64c54e8b18c5339cfbdf5dcd"
	Aug 05 23:21:36 multinode-985000 kubelet[2102]: I0805 23:21:36.659322    2102 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35b9ac42edc06af57c697463456d60a00f8d9d12849ef967af1e639bc238e3b3"
	Aug 05 23:21:37 multinode-985000 kubelet[2102]: I0805 23:21:37.680862    2102 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fqtll" podStartSLOduration=17.680846593 podStartE2EDuration="17.680846593s" podCreationTimestamp="2024-08-05 23:21:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 23:21:37.680757892 +0000 UTC m=+31.379457958" watchObservedRunningTime="2024-08-05 23:21:37.680846593 +0000 UTC m=+31.379546655"
	Aug 05 23:22:06 multinode-985000 kubelet[2102]: E0805 23:22:06.390750    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:22:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:22:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:22:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:22:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [3d9fd612d0b1] <==
	I0805 23:21:36.824264       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 23:21:36.839328       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 23:21:36.841986       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 23:21:36.851899       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 23:21:36.852326       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-985000_20a8683f-3aa0-4f0f-a016-73ecb7148b29!
	I0805 23:21:36.851925       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cf31f72-12b6-4b0c-b90e-6ea19cb3d50f", APIVersion:"v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-985000_20a8683f-3aa0-4f0f-a016-73ecb7148b29 became leader
	I0805 23:21:36.952695       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-985000_20a8683f-3aa0-4f0f-a016-73ecb7148b29!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-985000 -n multinode-985000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-985000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/FreshStart2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (144.48s)
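
The failure above bottoms out in dockerd on multinode-985000-m02 timing out while dialing /run/containerd/containerd.sock. A hedged triage sketch, not part of the test run: the profile and node names are taken from the log, and it assumes the VM is still up and that `minikube ssh --node` can target the worker.

    # Check the container runtime units on the failing worker (assumed still reachable)
    out/minikube-darwin-amd64 ssh -p multinode-985000 -n multinode-985000-m02 -- sudo systemctl status containerd docker
    # Tail containerd's journal to see why its socket never accepted connections
    out/minikube-darwin-amd64 ssh -p multinode-985000 -n multinode-985000-m02 -- sudo journalctl -u containerd --no-pager -n 50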

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (689.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- rollout status deployment/busybox
E0805 16:26:19.195782    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:26:50.597829    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 16:29:53.653400    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 16:31:19.194894    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:31:50.597786    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- rollout status deployment/busybox: exit status 1 (10m1.87271427s)

                                                
                                                
-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 2 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 2 updated replicas are available...

                                                
                                                
-- /stdout --
** stderr ** 
	error: deployment "busybox" exceeded its progress deadline

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
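The rollout deadline was exceeded because only one of the two busybox replicas ever became available. A hedged follow-up sketch, assuming the deployment name from the manifest applied above:

    # Re-check rollout progress and surface the deployment's recorded conditions
    kubectl --context multinode-985000 rollout status deployment/busybox
    kubectl --context multinode-985000 get deployment busybox -o jsonpath='{.status.conditions}'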
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:524: failed to resolve pod IPs: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
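Every poll above returned the single IP 10.244.0.3, so only one busybox replica was ever placed on a node. A hedged sketch for confirming placement, assuming the pod name shown in the exec calls below:

    # Show pod-to-node placement; an unscheduled pod has no NODE and no IP
    kubectl --context multinode-985000 get pods -o wide
    # Describe the stuck replica to surface scheduler events
    kubectl --context multinode-985000 describe pod busybox-fc5497c4f-ptd5b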
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- exec busybox-fc5497c4f-44k5g -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- exec busybox-fc5497c4f-ptd5b -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- exec busybox-fc5497c4f-ptd5b -- nslookup kubernetes.io: exit status 1 (119.147798ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-ptd5b does not have a host assigned

                                                
                                                
** /stderr **
multinode_test.go:538: Pod busybox-fc5497c4f-ptd5b could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- exec busybox-fc5497c4f-44k5g -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- exec busybox-fc5497c4f-ptd5b -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- exec busybox-fc5497c4f-ptd5b -- nslookup kubernetes.default: exit status 1 (120.139394ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-ptd5b does not have a host assigned

                                                
                                                
** /stderr **
multinode_test.go:548: Pod busybox-fc5497c4f-ptd5b could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- exec busybox-fc5497c4f-44k5g -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- exec busybox-fc5497c4f-ptd5b -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- exec busybox-fc5497c4f-ptd5b -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (118.929827ms)

** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-ptd5b does not have a host assigned

** /stderr **
multinode_test.go:556: Pod busybox-fc5497c4f-ptd5b could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
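
Note: all three BadRequest failures above share one root cause: the API server refuses `exec` on a pod that has no node assigned, i.e. busybox-fc5497c4f-ptd5b is still Pending, which is consistent with only one Pod IP ever appearing. A hypothetical one-off check (not part of the test) that would confirm it:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// An empty nodeName together with phase "Pending" matches the
		// "does not have a host assigned" error from the API server.
		out, err := exec.Command("out/minikube-darwin-amd64", "kubectl", "-p", "multinode-985000",
			"--", "get", "pod", "busybox-fc5497c4f-ptd5b",
			"-o", "jsonpath={.status.phase}/{.spec.nodeName}").CombinedOutput()
		fmt.Printf("%s (err=%v)\n", out, err)
	}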
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:244: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-985000 logs -n 25: (2.068005949s)
helpers_test.go:252: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p second-744000                                  | second-744000        | jenkins | v1.33.1 | 05 Aug 24 16:18 PDT | 05 Aug 24 16:18 PDT |
	| delete  | -p first-742000                                   | first-742000         | jenkins | v1.33.1 | 05 Aug 24 16:18 PDT | 05 Aug 24 16:18 PDT |
	| start   | -p mount-start-1-684000                           | mount-start-1-684000 | jenkins | v1.33.1 | 05 Aug 24 16:18 PDT |                     |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46464                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=hyperkit                                 |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-703000                           | mount-start-2-703000 | jenkins | v1.33.1 | 05 Aug 24 16:20 PDT | 05 Aug 24 16:20 PDT |
	| delete  | -p mount-start-1-684000                           | mount-start-1-684000 | jenkins | v1.33.1 | 05 Aug 24 16:20 PDT | 05 Aug 24 16:20 PDT |
	| start   | -p multinode-985000                               | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:20 PDT |                     |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=hyperkit                                 |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- apply -f                   | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:22 PDT | 05 Aug 24 16:22 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- rollout                    | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:22 PDT |                     |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:32 PDT | 05 Aug 24 16:32 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 16:20:32
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 16:20:32.303800    4640 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:20:32.303980    4640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:20:32.303986    4640 out.go:304] Setting ErrFile to fd 2...
	I0805 16:20:32.303990    4640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:20:32.304163    4640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:20:32.305609    4640 out.go:298] Setting JSON to false
	I0805 16:20:32.329307    4640 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3003,"bootTime":1722897029,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:20:32.329400    4640 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:20:32.351877    4640 out.go:177] * [multinode-985000] minikube v1.33.1 on Darwin 14.5
	I0805 16:20:32.392940    4640 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:20:32.393020    4640 notify.go:220] Checking for updates...
	I0805 16:20:32.435775    4640 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:20:32.456783    4640 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:20:32.477872    4640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:20:32.499010    4640 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:20:32.519936    4640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:20:32.541363    4640 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:20:32.571784    4640 out.go:177] * Using the hyperkit driver based on user configuration
	I0805 16:20:32.613992    4640 start.go:297] selected driver: hyperkit
	I0805 16:20:32.614020    4640 start.go:901] validating driver "hyperkit" against <nil>
	I0805 16:20:32.614042    4640 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:20:32.618322    4640 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:20:32.618456    4640 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 16:20:32.627075    4640 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 16:20:32.631391    4640 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:20:32.631417    4640 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 16:20:32.631452    4640 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:20:32.631678    4640 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:20:32.631709    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:20:32.631719    4640 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0805 16:20:32.631730    4640 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 16:20:32.631823    4640 start.go:340] cluster config:
	{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:20:32.631925    4640 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:20:32.673756    4640 out.go:177] * Starting "multinode-985000" primary control-plane node in "multinode-985000" cluster
	I0805 16:20:32.695001    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:20:32.695088    4640 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 16:20:32.695107    4640 cache.go:56] Caching tarball of preloaded images
	I0805 16:20:32.695319    4640 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:20:32.695338    4640 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:20:32.695809    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:20:32.695848    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json: {Name:mk470c2e849a0c86ee251e86e74d9f6dfdb47dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:32.696485    4640 start.go:360] acquireMachinesLock for multinode-985000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:20:32.696593    4640 start.go:364] duration metric: took 88.666µs to acquireMachinesLock for "multinode-985000"
	I0805 16:20:32.696646    4640 start.go:93] Provisioning new machine with config: &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:20:32.696745    4640 start.go:125] createHost starting for "" (driver="hyperkit")
	I0805 16:20:32.718059    4640 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:20:32.718351    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:20:32.718416    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:20:32.728195    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52477
	I0805 16:20:32.728547    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:20:32.728938    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:20:32.728948    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:20:32.729147    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:20:32.729251    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:32.729369    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:32.729498    4640 start.go:159] libmachine.API.Create for "multinode-985000" (driver="hyperkit")
	I0805 16:20:32.729521    4640 client.go:168] LocalClient.Create starting
	I0805 16:20:32.729556    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:20:32.729608    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:20:32.729625    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:20:32.729685    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:20:32.729724    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:20:32.729737    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:20:32.729749    4640 main.go:141] libmachine: Running pre-create checks...
	I0805 16:20:32.729760    4640 main.go:141] libmachine: (multinode-985000) Calling .PreCreateCheck
	I0805 16:20:32.729840    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:32.729974    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:32.739224    4640 main.go:141] libmachine: Creating machine...
	I0805 16:20:32.739247    4640 main.go:141] libmachine: (multinode-985000) Calling .Create
	I0805 16:20:32.739475    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:32.739754    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.739457    4648 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:20:32.739852    4640 main.go:141] libmachine: (multinode-985000) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:20:32.920622    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.920524    4648 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa...
	I0805 16:20:32.957084    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.957005    4648 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk...
	I0805 16:20:32.957123    4640 main.go:141] libmachine: (multinode-985000) DBG | Writing magic tar header
	I0805 16:20:32.957134    4640 main.go:141] libmachine: (multinode-985000) DBG | Writing SSH key tar header
	I0805 16:20:32.957531    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.957490    4648 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000 ...
	I0805 16:20:33.331110    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:33.331140    4640 main.go:141] libmachine: (multinode-985000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid
	I0805 16:20:33.331159    4640 main.go:141] libmachine: (multinode-985000) DBG | Using UUID 3ac698fc-f622-443b-898d-9b152fa64288
	I0805 16:20:33.442582    4640 main.go:141] libmachine: (multinode-985000) DBG | Generated MAC e2:6:14:d2:13:ae
	I0805 16:20:33.442603    4640 main.go:141] libmachine: (multinode-985000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:20:33.442636    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:20:33.442669    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:20:33.442719    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3ac698fc-f622-443b-898d-9b152fa64288", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:20:33.442758    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3ac698fc-f622-443b-898d-9b152fa64288 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
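
Note: in the Arguments dump above, each -s flag attaches a device to a PCI slot, -U fixes the UUID that bootpd keys its DHCP lease on, and -f kexec boots the unpacked kernel/initrd with the trailing cmdline. A simplified sketch of how such an argv is assembled (variable layout is illustrative, not the driver's actual code):

	package main

	import "fmt"

	func main() {
		stateDir := "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000"
		uuid := "3ac698fc-f622-443b-898d-9b152fa64288"
		cmdline := "earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"

		args := []string{
			"-A", "-u",
			"-F", stateDir + "/hyperkit.pid", // pidfile checked by the "hyperkit pid from json" probes
			"-c", "2", "-m", "2200M", // vCPUs and memory from the cluster config
			"-s", "0:0,hostbridge", "-s", "31,lpc", // host bridge + LPC bus for the serial port
			"-s", "1:0,virtio-net", // NIC on the shared vmnet network
			"-U", uuid,
			"-s", "2:0,virtio-blk," + stateDir + "/multinode-985000.rawdisk",
			"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
			"-s", "4,virtio-rnd",
			"-l", "com1,autopty=" + stateDir + "/tty,log=" + stateDir + "/console-ring",
			"-f", "kexec," + stateDir + "/bzimage," + stateDir + "/initrd," + cmdline,
		}
		fmt.Println(args)
	}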
	I0805 16:20:33.442774    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:20:33.445733    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Pid is 4651
	I0805 16:20:33.446145    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 0
	I0805 16:20:33.446167    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:33.446227    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:33.447073    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:33.447135    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:33.447152    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:33.447186    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:33.447202    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:33.447214    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:33.447222    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:33.447229    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:33.447247    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:33.447269    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:33.447287    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:33.447304    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:33.447321    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:33.453446    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:20:33.506623    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:20:33.507268    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:20:33.507283    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:20:33.507290    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:20:33.507298    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:20:33.891346    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:20:33.891387    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:20:34.006163    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:20:34.006177    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:20:34.006189    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:20:34.006208    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:20:34.007050    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:20:34.007082    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:20:35.448624    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 1
	I0805 16:20:35.448640    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:35.448724    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:35.449516    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:35.449591    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:35.449607    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:35.449619    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:35.449625    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:35.449648    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:35.449664    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:35.449695    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:35.449711    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:35.449719    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:35.449725    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:35.449731    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:35.449738    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:37.449834    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 2
	I0805 16:20:37.449851    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:37.449867    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:37.450676    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:37.450690    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:37.450697    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:37.450707    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:37.450722    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:37.450733    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:37.450744    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:37.450754    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:37.450771    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:37.450784    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:37.450797    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:37.450809    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:37.450819    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:39.451161    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 3
	I0805 16:20:39.451179    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:39.451277    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:39.452025    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:39.452066    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:39.452089    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:39.452104    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:39.452124    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:39.452141    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:39.452154    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:39.452161    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:39.452167    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:39.452183    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:39.452195    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:39.452202    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:39.452211    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:39.592041    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:20:39.592070    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:20:39.592076    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:20:39.615760    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:20:41.452210    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 4
	I0805 16:20:41.452225    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:41.452325    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:41.453101    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:41.453153    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:41.453162    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:41.453169    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:41.453178    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:41.453187    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:41.453194    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:41.453200    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:41.453219    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:41.453231    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:41.453241    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:41.453250    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:41.453258    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:43.455148    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 5
	I0805 16:20:43.455166    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:43.455244    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:43.456059    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:43.456103    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:20:43.456115    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:20:43.456122    4640 main.go:141] libmachine: (multinode-985000) DBG | Found match: e2:6:14:d2:13:ae
	I0805 16:20:43.456127    4640 main.go:141] libmachine: (multinode-985000) DBG | IP: 192.169.0.13
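
Note: the repeated "Attempt N" blocks above are the driver re-reading macOS's bootpd lease file every ~2s until the freshly generated MAC e2:6:14:d2:13:ae appears. A simplified sketch of that scan, assuming the usual /var/db/dhcpd_leases entry layout with ip_address preceding hw_address (the real parser lives in the hyperkit driver):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findLeaseIP scans /var/db/dhcpd_leases for an entry whose hw_address
	// ends with the given MAC and returns the ip_address seen just before it.
	func findLeaseIP(mac string) (string, error) {
		f, err := os.Open("/var/db/dhcpd_leases")
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
				return ip, nil
			}
		}
		return "", fmt.Errorf("no lease found for %s", mac)
	}

	func main() {
		fmt.Println(findLeaseIP("e2:6:14:d2:13:ae")) // MAC from the log above
	}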
	I0805 16:20:43.456181    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:43.456781    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:43.456879    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:43.456972    4640 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 16:20:43.456985    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:20:43.457082    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:43.457144    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:43.457907    4640 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 16:20:43.457917    4640 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 16:20:43.457923    4640 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 16:20:43.457927    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:43.458023    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:43.458126    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:43.458255    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:43.458346    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:43.458472    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:43.458676    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:43.458683    4640 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 16:20:44.513424    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
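
Note: the empty `SSH cmd err, output: <nil>` above is the success path of the WaitForSSH probe: the machine counts as up once a trivial `exit 0` round-trips over SSH. A stripped-down version of such a probe using golang.org/x/crypto/ssh (retry count and interval are illustrative; user and key path are taken from the log):

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// waitForSSH dials addr with the machine's generated key and retries
	// "exit 0" until it succeeds or the attempts run out.
	func waitForSSH(addr, keyPath string) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            "docker", // username shown in the sshutil log line below
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test-VM host keys
			Timeout:         5 * time.Second,
		}
		for i := 0; i < 30; i++ {
			if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
				sess, serr := client.NewSession()
				if serr == nil {
					rerr := sess.Run("exit 0")
					sess.Close()
					client.Close()
					if rerr == nil {
						return nil
					}
				} else {
					client.Close()
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("ssh never became available at %s", addr)
	}

	func main() {
		fmt.Println(waitForSSH("192.169.0.13:22",
			"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa"))
	}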
	I0805 16:20:44.513443    4640 main.go:141] libmachine: Detecting the provisioner...
	I0805 16:20:44.513452    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.513594    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.513694    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.513791    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.513876    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.513996    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.514158    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.514165    4640 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 16:20:44.573082    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 16:20:44.573142    4640 main.go:141] libmachine: found compatible host: buildroot
	I0805 16:20:44.573149    4640 main.go:141] libmachine: Provisioning with buildroot...
	I0805 16:20:44.573155    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.573299    4640 buildroot.go:166] provisioning hostname "multinode-985000"
	I0805 16:20:44.573311    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.573416    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.573499    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.573585    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.573680    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.573795    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.573922    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.574068    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.574076    4640 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000 && echo "multinode-985000" | sudo tee /etc/hostname
	I0805 16:20:44.637872    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000
	
	I0805 16:20:44.637892    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.638029    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.638132    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.638218    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.638297    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.638429    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.638562    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.638582    4640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:20:44.698340    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:20:44.698360    4640 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:20:44.698377    4640 buildroot.go:174] setting up certificates
	I0805 16:20:44.698389    4640 provision.go:84] configureAuth start
	I0805 16:20:44.698397    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.698544    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:44.698658    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.698750    4640 provision.go:143] copyHostCerts
	I0805 16:20:44.698781    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:20:44.698850    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:20:44.698858    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:20:44.699001    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:20:44.699205    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:20:44.699246    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:20:44.699250    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:20:44.699341    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:20:44.699482    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:20:44.699528    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:20:44.699533    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:20:44.699615    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:20:44.699756    4640 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-985000]
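	(minikube generates this server certificate in-process via Go's crypto/x509, not by shelling out; purely for illustration, a roughly equivalent openssl flow for the same org and SAN list shown above would be:)
	
	  # Hypothetical openssl equivalent of the cert generated above -- a sketch,
	  # not what minikube actually runs:
	  openssl req -new -newkey rsa:2048 -nodes \
	    -keyout server-key.pem -out server.csr \
	    -subj "/O=jenkins.multinode-985000"
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -out server.pem -days 365 \
	    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.169.0.13,DNS:localhost,DNS:minikube,DNS:multinode-985000')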
	I0805 16:20:45.028860    4640 provision.go:177] copyRemoteCerts
	I0805 16:20:45.028920    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:20:45.028938    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.029080    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.029180    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.029338    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.029452    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:45.063652    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:20:45.063724    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:20:45.083743    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:20:45.083800    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0805 16:20:45.103791    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:20:45.103863    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:20:45.123716    4640 provision.go:87] duration metric: took 425.312704ms to configureAuth
	I0805 16:20:45.123731    4640 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:20:45.123881    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:20:45.123894    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:45.124028    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.124115    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.124206    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.124285    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.124381    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.124503    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.124632    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.124639    4640 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:20:45.176256    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:20:45.176269    4640 buildroot.go:70] root file system type: tmpfs
	I0805 16:20:45.176337    4640 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:20:45.176350    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.176482    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.176580    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.176695    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.176782    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.176911    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.177045    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.177090    4640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:20:45.240992    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:20:45.241023    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.241166    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.241270    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.241382    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.241469    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.241590    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.241743    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.241755    4640 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:20:46.765402    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
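	(The diff-or-replace command above is an idempotent update idiom: diff's exit status decides whether the staged unit replaces the live one, so systemd is only reloaded and the service only restarted when the rendered unit actually changed -- here the target did not exist yet, so the replace branch ran. A hypothetical refinement of the same idiom that also cleans up the staged file when nothing changed:)
	
	  if sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	    sudo rm /lib/systemd/system/docker.service.new   # identical: drop the staged copy
	  else
	    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl daemon-reload && sudo systemctl enable docker && sudo systemctl restart docker
	  fi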
	I0805 16:20:46.765418    4640 main.go:141] libmachine: Checking connection to Docker...
	I0805 16:20:46.765424    4640 main.go:141] libmachine: (multinode-985000) Calling .GetURL
	I0805 16:20:46.765563    4640 main.go:141] libmachine: Docker is up and running!
	I0805 16:20:46.765570    4640 main.go:141] libmachine: Reticulating splines...
	I0805 16:20:46.765575    4640 client.go:171] duration metric: took 14.036043683s to LocalClient.Create
	I0805 16:20:46.765592    4640 start.go:167] duration metric: took 14.036090848s to libmachine.API.Create "multinode-985000"
	I0805 16:20:46.765602    4640 start.go:293] postStartSetup for "multinode-985000" (driver="hyperkit")
	I0805 16:20:46.765609    4640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:20:46.765620    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.765765    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:20:46.765778    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.765878    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.765972    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.766070    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.766168    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.808597    4640 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:20:46.814840    4640 command_runner.go:130] > NAME=Buildroot
	I0805 16:20:46.814852    4640 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:20:46.814856    4640 command_runner.go:130] > ID=buildroot
	I0805 16:20:46.814869    4640 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:20:46.814873    4640 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:20:46.814969    4640 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:20:46.814985    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:20:46.815099    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:20:46.815290    4640 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:20:46.815297    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:20:46.815526    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:20:46.832473    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:20:46.852626    4640 start.go:296] duration metric: took 87.015317ms for postStartSetup
	I0805 16:20:46.852653    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:46.853264    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:46.853417    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:20:46.853762    4640 start.go:128] duration metric: took 14.156998155s to createHost
	I0805 16:20:46.853776    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.853870    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.853964    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.854078    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.854160    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.854284    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:46.854405    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:46.854413    4640 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:20:46.906137    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722900047.071906799
	
	I0805 16:20:46.906149    4640 fix.go:216] guest clock: 1722900047.071906799
	I0805 16:20:46.906154    4640 fix.go:229] Guest: 2024-08-05 16:20:47.071906799 -0700 PDT Remote: 2024-08-05 16:20:46.85377 -0700 PDT m=+14.585721958 (delta=218.136799ms)
	I0805 16:20:46.906178    4640 fix.go:200] guest clock delta is within tolerance: 218.136799ms
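	(The guest clock check above runs date +%s.%N inside the VM and compares the result against the host clock; a resync only happens when the delta leaves tolerance. A minimal sketch of the same comparison, assuming SSH access with the key and user this log shows, and that bc is available on the host:)
	
	  guest=$(ssh -i ~/.minikube/machines/multinode-985000/id_rsa docker@192.169.0.13 'date +%s.%N')
	  host=$(date +%s.%N)
	  echo "guest clock delta: $(echo "$host - $guest" | bc)s"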
	I0805 16:20:46.906182    4640 start.go:83] releasing machines lock for "multinode-985000", held for 14.209573761s
	I0805 16:20:46.906200    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906321    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:46.906429    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906734    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906832    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906917    4640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:20:46.906947    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.906977    4640 ssh_runner.go:195] Run: cat /version.json
	I0805 16:20:46.906987    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.907036    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.907080    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.907105    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.907167    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.907190    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.907251    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.907285    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.907353    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.936969    4640 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0805 16:20:46.937263    4640 ssh_runner.go:195] Run: systemctl --version
	I0805 16:20:46.992747    4640 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:20:46.993626    4640 command_runner.go:130] > systemd 252 (252)
	I0805 16:20:46.993660    4640 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0805 16:20:46.993799    4640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:20:46.998949    4640 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:20:46.998969    4640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:20:46.999002    4640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:20:47.012276    4640 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:20:47.012544    4640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
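	(Renaming bridge/podman CNI configs to *.mk_disabled keeps them out of the CRI's config-directory scan without deleting them, since minikube will manage CNI itself -- kindnet is selected later in this run. A hypothetical one-liner to undo the change by hand:)
	
	  for f in /etc/cni/net.d/*.mk_disabled; do
	    sudo mv "$f" "${f%.mk_disabled}"   # restore the original .conflist name
	  done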
	I0805 16:20:47.012556    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:20:47.012657    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:20:47.027593    4640 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:20:47.027660    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:20:47.035836    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:20:47.044911    4640 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:20:47.044968    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:20:47.053571    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:20:47.061858    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:20:47.070031    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:20:47.078524    4640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:20:47.087870    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:20:47.096303    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:20:47.104482    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:20:47.112756    4640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:20:47.120033    4640 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:20:47.120127    4640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:20:47.128644    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:47.220387    4640 ssh_runner.go:195] Run: sudo systemctl restart containerd
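	(Taken together, the sed edits above force containerd onto the cgroupfs driver (SystemdCgroup = false), pin the sandbox image to registry.k8s.io/pause:3.9, migrate runtime entries to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d before the restart. A quick, illustrative way to confirm the result on the guest:)
	
	  grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml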
	I0805 16:20:47.239567    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:20:47.239642    4640 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:20:47.254939    4640 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:20:47.255001    4640 command_runner.go:130] > [Unit]
	I0805 16:20:47.255011    4640 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:20:47.255015    4640 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:20:47.255020    4640 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:20:47.255026    4640 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:20:47.255030    4640 command_runner.go:130] > StartLimitBurst=3
	I0805 16:20:47.255034    4640 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:20:47.255037    4640 command_runner.go:130] > [Service]
	I0805 16:20:47.255041    4640 command_runner.go:130] > Type=notify
	I0805 16:20:47.255055    4640 command_runner.go:130] > Restart=on-failure
	I0805 16:20:47.255063    4640 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:20:47.255073    4640 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:20:47.255080    4640 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:20:47.255088    4640 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:20:47.255094    4640 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:20:47.255099    4640 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:20:47.255112    4640 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:20:47.255120    4640 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:20:47.255128    4640 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:20:47.255134    4640 command_runner.go:130] > ExecStart=
	I0805 16:20:47.255164    4640 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:20:47.255172    4640 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:20:47.255182    4640 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:20:47.255189    4640 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:20:47.255193    4640 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:20:47.255196    4640 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:20:47.255200    4640 command_runner.go:130] > LimitCORE=infinity
	I0805 16:20:47.255205    4640 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:20:47.255209    4640 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:20:47.255212    4640 command_runner.go:130] > TasksMax=infinity
	I0805 16:20:47.255215    4640 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:20:47.255220    4640 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:20:47.255225    4640 command_runner.go:130] > Delegate=yes
	I0805 16:20:47.255230    4640 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:20:47.255233    4640 command_runner.go:130] > KillMode=process
	I0805 16:20:47.255236    4640 command_runner.go:130] > [Install]
	I0805 16:20:47.255259    4640 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:20:47.255324    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:20:47.269909    4640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:20:47.286027    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:20:47.296365    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:20:47.306405    4640 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:20:47.369760    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:20:47.379998    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:20:47.394696    4640 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:20:47.394951    4640 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:20:47.397850    4640 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:20:47.398038    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:20:47.406063    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
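	(The 189-byte 10-cni.conf drop-in written here augments cri-docker.service without editing the packaged unit; systemd merges every *.conf under the .service.d directory into the service. To see the merged result on the guest, the same systemctl cat used for docker.service earlier in this log works:)
	
	  sudo systemctl cat cri-docker.service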
	I0805 16:20:47.419537    4640 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:20:47.514227    4640 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:20:47.637079    4640 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:20:47.637156    4640 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:20:47.651314    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:47.748259    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:20:50.076345    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.32806615s)
	I0805 16:20:50.076407    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:20:50.086580    4640 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 16:20:50.099944    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:20:50.110410    4640 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:20:50.206329    4640 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:20:50.317239    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:50.417670    4640 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:20:50.431617    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:20:50.443305    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:50.555307    4640 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:20:50.610408    4640 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:20:50.610481    4640 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:20:50.614751    4640 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0805 16:20:50.614762    4640 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0805 16:20:50.614767    4640 command_runner.go:130] > Device: 0,22	Inode: 806         Links: 1
	I0805 16:20:50.614772    4640 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0805 16:20:50.614775    4640 command_runner.go:130] > Access: 2024-08-05 23:20:50.735793184 +0000
	I0805 16:20:50.614784    4640 command_runner.go:130] > Modify: 2024-08-05 23:20:50.735793184 +0000
	I0805 16:20:50.614789    4640 command_runner.go:130] > Change: 2024-08-05 23:20:50.736793062 +0000
	I0805 16:20:50.614792    4640 command_runner.go:130] >  Birth: -
	I0805 16:20:50.614829    4640 start.go:563] Will wait 60s for crictl version
	I0805 16:20:50.614890    4640 ssh_runner.go:195] Run: which crictl
	I0805 16:20:50.617807    4640 command_runner.go:130] > /usr/bin/crictl
	I0805 16:20:50.617933    4640 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:20:50.644026    4640 command_runner.go:130] > Version:  0.1.0
	I0805 16:20:50.644070    4640 command_runner.go:130] > RuntimeName:  docker
	I0805 16:20:50.644117    4640 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0805 16:20:50.644195    4640 command_runner.go:130] > RuntimeApiVersion:  v1
	I0805 16:20:50.645396    4640 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0805 16:20:50.645460    4640 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:20:50.661131    4640 command_runner.go:130] > 27.1.1
	I0805 16:20:50.662194    4640 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:20:50.677860    4640 command_runner.go:130] > 27.1.1
	I0805 16:20:50.700872    4640 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 16:20:50.700922    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:50.701316    4640 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0805 16:20:50.706154    4640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
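	(The brace-group idiom above exists because an output redirection runs with the caller's privileges, not sudo's: the filtered hosts file is assembled unprivileged under /tmp and only the final cp is elevated. Generalized as a hypothetical helper:)
	
	  update_hosts_entry() {
	    local ip=$1 name=$2
	    # drop any existing "<ip>\t<name>" line, append the fresh one
	    { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
	    sudo cp "/tmp/hosts.$$" /etc/hosts && rm "/tmp/hosts.$$"
	  }
	  update_hosts_entry 192.169.0.1 host.minikube.internal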
	I0805 16:20:50.715610    4640 kubeadm.go:883] updating cluster {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 16:20:50.715677    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:20:50.715736    4640 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:20:50.733572    4640 docker.go:685] Got preloaded images: 
	I0805 16:20:50.733584    4640 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0805 16:20:50.733634    4640 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:20:50.741005    4640 command_runner.go:139] > {"Repositories":{}}
	I0805 16:20:50.741090    4640 ssh_runner.go:195] Run: which lz4
	I0805 16:20:50.744527    4640 command_runner.go:130] > /usr/bin/lz4
	I0805 16:20:50.744558    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0805 16:20:50.744692    4640 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 16:20:50.747718    4640 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 16:20:50.747836    4640 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 16:20:50.747851    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0805 16:20:51.865752    4640 docker.go:649] duration metric: took 1.121114736s to copy over tarball
	I0805 16:20:51.865833    4640 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 16:20:54.241811    4640 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.375959074s)
	I0805 16:20:54.241825    4640 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 16:20:54.267125    4640 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:20:54.275283    4640 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d2
89d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0805 16:20:54.275373    4640 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0805 16:20:54.288931    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:54.386395    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:20:56.795159    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.408741228s)
	I0805 16:20:56.795248    4640 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:20:56.808093    4640 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0805 16:20:56.808107    4640 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0805 16:20:56.808111    4640 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0805 16:20:56.808116    4640 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0805 16:20:56.808120    4640 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0805 16:20:56.808123    4640 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0805 16:20:56.808128    4640 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0805 16:20:56.808135    4640 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:20:56.809018    4640 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 16:20:56.809035    4640 cache_images.go:84] Images are preloaded, skipping loading
	I0805 16:20:56.809048    4640 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0805 16:20:56.809127    4640 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-985000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 16:20:56.809195    4640 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 16:20:56.847007    4640 command_runner.go:130] > cgroupfs
	I0805 16:20:56.847610    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:20:56.847620    4640 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 16:20:56.847630    4640 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 16:20:56.847650    4640 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-985000 NodeName:multinode-985000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 16:20:56.847744    4640 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-985000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
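	(This rendered config is staged as /var/tmp/minikube/kubeadm.yaml.new a few lines below and later handed to kubeadm. A hypothetical standalone sanity check against such a file, using kubeadm's preflight phase:)
	
	  sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml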
	I0805 16:20:56.847807    4640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 16:20:56.855919    4640 command_runner.go:130] > kubeadm
	I0805 16:20:56.855931    4640 command_runner.go:130] > kubectl
	I0805 16:20:56.855934    4640 command_runner.go:130] > kubelet
	I0805 16:20:56.855959    4640 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:20:56.856010    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 16:20:56.863284    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0805 16:20:56.876753    4640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:20:56.890292    4640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0805 16:20:56.904628    4640 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0805 16:20:56.907711    4640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:20:56.917108    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:57.013172    4640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:20:57.028650    4640 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000 for IP: 192.169.0.13
	I0805 16:20:57.028663    4640 certs.go:194] generating shared ca certs ...
	I0805 16:20:57.028674    4640 certs.go:226] acquiring lock for ca certs: {Name:mkb83e058d89c7d4e66f4136f377a3c305b13735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.028863    4640 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key
	I0805 16:20:57.028935    4640 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key
	I0805 16:20:57.028946    4640 certs.go:256] generating profile certs ...
	I0805 16:20:57.028995    4640 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key
	I0805 16:20:57.029007    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt with IP's: []
	I0805 16:20:57.088127    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt ...
	I0805 16:20:57.088142    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt: {Name:mkb7087fa165ae496621b10df42dfd2f8603360a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.088531    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key ...
	I0805 16:20:57.088540    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key: {Name:mk37e627de9c39a2300d317d721ebf92a202a17e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.088775    4640 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec
	I0805 16:20:57.088790    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.13]
	I0805 16:20:57.189318    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec ...
	I0805 16:20:57.189336    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec: {Name:mkb4501af4f6db766eb719de2f42fc564a23d2d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.189653    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec ...
	I0805 16:20:57.189669    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec: {Name:mke641ddecfc5629bb592a5b6321d446ed3b31bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.189903    4640 certs.go:381] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt
	I0805 16:20:57.190140    4640 certs.go:385] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key
	I0805 16:20:57.190318    4640 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key
	I0805 16:20:57.190336    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt with IP's: []
	I0805 16:20:57.386717    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt ...
	I0805 16:20:57.386733    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt: {Name:mk486344c8c5b8383e5349f68a995b553e8d31c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.387043    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key ...
	I0805 16:20:57.387052    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key: {Name:mk2b24e1a5e962e12395adf21e4f6ad64901ee0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.387278    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 16:20:57.387306    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 16:20:57.387325    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 16:20:57.387349    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 16:20:57.387368    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 16:20:57.387391    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 16:20:57.387411    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 16:20:57.387432    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 16:20:57.387531    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem (1338 bytes)
	W0805 16:20:57.387583    4640 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0805 16:20:57.387591    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:20:57.387621    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem (1082 bytes)
	I0805 16:20:57.387656    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:20:57.387684    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem (1675 bytes)
	I0805 16:20:57.387747    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:20:57.387781    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem -> /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.387803    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.387822    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.388188    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:20:57.408800    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 16:20:57.429927    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:20:57.449924    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:20:57.470736    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 16:20:57.490564    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 16:20:57.511342    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:20:57.531190    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 16:20:57.551984    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0805 16:20:57.571601    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0805 16:20:57.592369    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:20:57.611866    4640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
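The NewFileAsset/scp pairs above show minikube staging its PKI into the guest: each local certificate gets a fixed destination under /var/lib/minikube/certs or /usr/share/ca-certificates and is copied over SSH. A minimal sketch of that source-to-destination copy loop, assuming shortened hypothetical paths and plain scp standing in for minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// assetPair mirrors the (source -> destination) pairs logged by
// vm_assets.NewFileAsset above; the type name is illustrative, not minikube's.
type assetPair struct {
	src, dst string
}

func main() {
	// Hypothetical shortened profile dir; the report uses
	// /Users/jenkins/minikube-integration/19373-1122/.minikube.
	base := "/Users/jenkins/.minikube"
	pairs := []assetPair{
		{base + "/ca.crt", "/var/lib/minikube/certs/ca.crt"},
		{base + "/ca.key", "/var/lib/minikube/certs/ca.key"},
		{base + "/profiles/multinode-985000/apiserver.crt", "/var/lib/minikube/certs/apiserver.crt"},
	}
	for _, p := range pairs {
		// Copy into the VM, as the ssh_runner.go:362 lines do; docker@192.169.0.13
		// is the SSH user and IP shown later in this log.
		cmd := exec.Command("scp", p.src, "docker@192.169.0.13:"+p.dst)
		if err := cmd.Run(); err != nil {
			fmt.Println("copy failed:", p.src, err)
		}
	}
}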
	I0805 16:20:57.626527    4640 ssh_runner.go:195] Run: openssl version
	I0805 16:20:57.630504    4640 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0805 16:20:57.630711    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0805 16:20:57.638913    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642115    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642280    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642315    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.646345    4640 command_runner.go:130] > 51391683
	I0805 16:20:57.646544    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0805 16:20:57.654953    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0805 16:20:57.663842    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667242    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667258    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667300    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.671438    4640 command_runner.go:130] > 3ec20f2e
	I0805 16:20:57.671648    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 16:20:57.679692    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:20:57.688061    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691411    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691493    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691531    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.695572    4640 command_runner.go:130] > b5213941
	I0805 16:20:57.695754    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
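Each CA above is linked into /etc/ssl/certs under its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0) so that OpenSSL can locate it by hash lookup. A sketch of the same compute-hash-then-symlink step, shelling out to openssl exactly as the log does; the certificate path is one of those shown above:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hashLink reproduces the pattern in the log: compute the OpenSSL subject
// hash of a PEM certificate, then symlink /etc/ssl/certs/<hash>.0 to it.
func hashLink(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}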
	I0805 16:20:57.704703    4640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:20:57.707752    4640 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 16:20:57.707872    4640 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 16:20:57.707921    4640 kubeadm.go:392] StartCluster: {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:20:57.708054    4640 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:20:57.720408    4640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 16:20:57.731114    4640 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0805 16:20:57.731128    4640 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0805 16:20:57.731133    4640 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0805 16:20:57.731194    4640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 16:20:57.739645    4640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 16:20:57.751095    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0805 16:20:57.751108    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0805 16:20:57.751113    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0805 16:20:57.751120    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:20:57.751266    4640 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:20:57.751273    4640 kubeadm.go:157] found existing configuration files:
	
	I0805 16:20:57.751324    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 16:20:57.759086    4640 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:20:57.759185    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:20:57.759233    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 16:20:57.769060    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 16:20:57.778103    4640 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:20:57.778143    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:20:57.778190    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 16:20:57.786612    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 16:20:57.794733    4640 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:20:57.794754    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:20:57.794796    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 16:20:57.802671    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 16:20:57.810242    4640 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:20:57.810264    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:20:57.810299    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
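The four grep/rm pairs above are a stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed so that kubeadm regenerates it cleanly. A compact sketch of that loop, operating on the local filesystem rather than over SSH:

package main

import (
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		// A missing file and a wrong endpoint both end in removal, matching
		// the "may not be in ... - will remove" lines above.
		if err != nil || !strings.Contains(string(data), endpoint) {
			_ = os.Remove(f)
		}
	}
}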
	I0805 16:20:57.818339    4640 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 16:20:57.890449    4640 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 16:20:57.890461    4640 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0805 16:20:57.890501    4640 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 16:20:57.890507    4640 command_runner.go:130] > [preflight] Running pre-flight checks
	I0805 16:20:57.984851    4640 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 16:20:57.984855    4640 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 16:20:57.984956    4640 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 16:20:57.984962    4640 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 16:20:57.985041    4640 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 16:20:57.985038    4640 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 16:20:58.152965    4640 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:20:58.152995    4640 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:20:58.175785    4640 out.go:204]   - Generating certificates and keys ...
	I0805 16:20:58.175840    4640 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0805 16:20:58.175851    4640 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 16:20:58.175914    4640 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0805 16:20:58.175920    4640 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 16:20:58.229002    4640 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 16:20:58.229016    4640 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 16:20:58.322701    4640 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 16:20:58.322717    4640 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0805 16:20:58.394063    4640 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 16:20:58.394077    4640 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0805 16:20:58.601975    4640 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 16:20:58.601995    4640 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0805 16:20:58.821056    4640 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 16:20:58.821065    4640 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0805 16:20:58.821204    4640 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:58.821214    4640 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.150811    4640 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 16:20:59.150817    4640 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0805 16:20:59.151036    4640 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.151046    4640 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.206073    4640 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 16:20:59.206088    4640 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 16:20:59.294956    4640 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 16:20:59.294966    4640 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 16:20:59.348591    4640 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 16:20:59.348602    4640 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0805 16:20:59.348788    4640 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:20:59.348797    4640 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:20:59.511379    4640 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:20:59.511395    4640 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:20:59.789652    4640 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 16:20:59.789666    4640 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 16:20:59.965508    4640 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:20:59.965517    4640 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:21:00.208268    4640 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:21:00.208284    4640 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:21:00.402575    4640 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:21:00.402582    4640 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:21:00.409122    4640 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 16:21:00.409137    4640 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 16:21:00.410639    4640 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:21:00.410652    4640 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:21:00.430944    4640 out.go:204]   - Booting up control plane ...
	I0805 16:21:00.431017    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:21:00.431032    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:21:00.431106    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:21:00.431106    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:21:00.431174    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:21:00.431182    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:21:00.431274    4640 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:21:00.431286    4640 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:21:00.431361    4640 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:21:00.431369    4640 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:21:00.431399    4640 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 16:21:00.431405    4640 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0805 16:21:00.540991    4640 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 16:21:00.541004    4640 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 16:21:00.541076    4640 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 16:21:00.541081    4640 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 16:21:01.042556    4640 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.719164ms
	I0805 16:21:01.042573    4640 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.719164ms
	I0805 16:21:01.042632    4640 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 16:21:01.042639    4640 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 16:21:05.541995    4640 kubeadm.go:310] [api-check] The API server is healthy after 4.502407968s
	I0805 16:21:05.542014    4640 command_runner.go:130] > [api-check] The API server is healthy after 4.502407968s
	I0805 16:21:05.551474    4640 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 16:21:05.551486    4640 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 16:21:05.558278    4640 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 16:21:05.558284    4640 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 16:21:05.572116    4640 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 16:21:05.572130    4640 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0805 16:21:05.572281    4640 kubeadm.go:310] [mark-control-plane] Marking the node multinode-985000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 16:21:05.572292    4640 command_runner.go:130] > [mark-control-plane] Marking the node multinode-985000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 16:21:05.579214    4640 kubeadm.go:310] [bootstrap-token] Using token: 0mwls8.ribzsy6ooov2flu0
	I0805 16:21:05.579225    4640 command_runner.go:130] > [bootstrap-token] Using token: 0mwls8.ribzsy6ooov2flu0
	I0805 16:21:05.613851    4640 out.go:204]   - Configuring RBAC rules ...
	I0805 16:21:05.613974    4640 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 16:21:05.613988    4640 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 16:21:05.655317    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 16:21:05.655329    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 16:21:05.659733    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 16:21:05.659737    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 16:21:05.661608    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 16:21:05.661619    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 16:21:05.663605    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 16:21:05.663612    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 16:21:05.665771    4640 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 16:21:05.665778    4640 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 16:21:05.947572    4640 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 16:21:05.947585    4640 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 16:21:06.357765    4640 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 16:21:06.357776    4640 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0805 16:21:06.946930    4640 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 16:21:06.946942    4640 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0805 16:21:06.947937    4640 kubeadm.go:310] 
	I0805 16:21:06.947989    4640 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 16:21:06.947996    4640 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0805 16:21:06.948000    4640 kubeadm.go:310] 
	I0805 16:21:06.948071    4640 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 16:21:06.948080    4640 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0805 16:21:06.948088    4640 kubeadm.go:310] 
	I0805 16:21:06.948121    4640 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 16:21:06.948125    4640 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0805 16:21:06.948179    4640 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 16:21:06.948187    4640 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 16:21:06.948229    4640 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 16:21:06.948234    4640 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 16:21:06.948237    4640 kubeadm.go:310] 
	I0805 16:21:06.948284    4640 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 16:21:06.948302    4640 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0805 16:21:06.948309    4640 kubeadm.go:310] 
	I0805 16:21:06.948354    4640 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 16:21:06.948367    4640 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 16:21:06.948375    4640 kubeadm.go:310] 
	I0805 16:21:06.948414    4640 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 16:21:06.948418    4640 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0805 16:21:06.948479    4640 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 16:21:06.948488    4640 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 16:21:06.948558    4640 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 16:21:06.948564    4640 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 16:21:06.948570    4640 kubeadm.go:310] 
	I0805 16:21:06.948633    4640 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 16:21:06.948638    4640 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0805 16:21:06.948701    4640 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 16:21:06.948708    4640 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0805 16:21:06.948715    4640 kubeadm.go:310] 
	I0805 16:21:06.948788    4640 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.948795    4640 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.948879    4640 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf \
	I0805 16:21:06.948886    4640 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf \
	I0805 16:21:06.948905    4640 kubeadm.go:310] 	--control-plane 
	I0805 16:21:06.948911    4640 command_runner.go:130] > 	--control-plane 
	I0805 16:21:06.948916    4640 kubeadm.go:310] 
	I0805 16:21:06.948980    4640 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 16:21:06.948984    4640 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0805 16:21:06.948987    4640 kubeadm.go:310] 
	I0805 16:21:06.949052    4640 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.949057    4640 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.949136    4640 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf 
	I0805 16:21:06.949141    4640 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf 
	I0805 16:21:06.949613    4640 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 16:21:06.949621    4640 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
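The --discovery-token-ca-cert-hash echoed in both join commands above is not arbitrary: kubeadm computes it as the SHA-256 digest of the cluster CA's public key in DER (SubjectPublicKeyInfo) form. A minimal sketch that recomputes the hash from the ca.crt staged earlier in this log, which is handy for verifying a join command out of band:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// /var/lib/minikube/certs/ca.crt is where this log copied the cluster CA.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}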
	I0805 16:21:06.949644    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:21:06.949649    4640 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 16:21:06.972147    4640 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0805 16:21:07.030449    4640 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0805 16:21:07.036220    4640 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0805 16:21:07.036233    4640 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0805 16:21:07.036239    4640 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0805 16:21:07.036249    4640 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 16:21:07.036254    4640 command_runner.go:130] > Access: 2024-08-05 23:20:43.694299549 +0000
	I0805 16:21:07.036259    4640 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0805 16:21:07.036264    4640 command_runner.go:130] > Change: 2024-08-05 23:20:41.058596444 +0000
	I0805 16:21:07.036266    4640 command_runner.go:130] >  Birth: -
	I0805 16:21:07.036368    4640 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0805 16:21:07.036375    4640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0805 16:21:07.050414    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0805 16:21:07.243070    4640 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0805 16:21:07.246445    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0805 16:21:07.250670    4640 command_runner.go:130] > serviceaccount/kindnet created
	I0805 16:21:07.255971    4640 command_runner.go:130] > daemonset.apps/kindnet created
	I0805 16:21:07.257424    4640 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 16:21:07.257500    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-985000 minikube.k8s.io/updated_at=2024_08_05T16_21_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=multinode-985000 minikube.k8s.io/primary=true
	I0805 16:21:07.257502    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.266956    4640 command_runner.go:130] > -16
	I0805 16:21:07.267023    4640 ops.go:34] apiserver oom_adj: -16
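The ops.go line above reads /proc/$(pgrep kube-apiserver)/oom_adj and expects a strongly negative value (here -16), confirming the kernel will not OOM-kill the apiserver before other processes. A sketch of the same check, assuming a single kube-apiserver process so that pgrep returns one pid:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Find the apiserver pid as the log's `pgrep kube-apiserver` does.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	// The log expects a strongly negative value such as -16.
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(adj)))
}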
	I0805 16:21:07.390396    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0805 16:21:07.392070    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.400579    4640 command_runner.go:130] > node/multinode-985000 labeled
	I0805 16:21:07.456213    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:07.893323    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.956622    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:08.392391    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:08.450793    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:08.892411    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:08.950456    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:09.393238    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:09.450291    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:09.892156    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:09.951159    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:10.393019    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:10.451734    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:10.893100    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:10.954360    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:11.393009    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:11.452879    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:11.894187    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:11.953480    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:12.392194    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:12.452444    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:12.894265    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:12.955367    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:13.392882    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:13.455680    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:13.892568    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:13.950195    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:14.393254    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:14.452940    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:14.892187    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:14.948447    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:15.392762    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:15.451815    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:15.892531    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:15.952781    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:16.393008    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:16.454659    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:16.892423    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:16.957989    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:17.392489    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:17.452653    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:17.892453    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:17.953809    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:18.392692    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:18.450726    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:18.893940    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:18.957266    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:19.393402    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:19.452345    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:19.892761    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:19.952524    4640 command_runner.go:130] > NAME      SECRETS   AGE
	I0805 16:21:19.952537    4640 command_runner.go:130] > default   0         1s
	I0805 16:21:19.952551    4640 kubeadm.go:1113] duration metric: took 12.695106906s to wait for elevateKubeSystemPrivileges
	I0805 16:21:19.952568    4640 kubeadm.go:394] duration metric: took 22.244643678s to StartCluster
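The long run of "serviceaccounts \"default\" not found" errors above is deliberate: after creating the minikube-rbac clusterrolebinding, minikube polls `kubectl get sa default` roughly every 500ms until the kube-controller-manager creates the default ServiceAccount (about 12.7s in this run). A sketch of that retry shape, with kubectl standing in for the in-VM binary and a hypothetical timeout:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // hypothetical timeout
	for time.Now().Before(deadline) {
		// Mirrors: kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
		if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
			fmt.Println("default serviceaccount is ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
	}
	fmt.Println("timed out waiting for default serviceaccount")
}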
	I0805 16:21:19.952584    4640 settings.go:142] acquiring lock: {Name:mk564a817a54ecf2aef16a4d2309e85208c0231f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:21:19.952678    4640 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:19.953130    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/kubeconfig: {Name:mk2a0d8b4d330b3c26432fc65d015ddf98a9cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:21:19.953387    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 16:21:19.953391    4640 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:21:19.953437    4640 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 16:21:19.953474    4640 addons.go:69] Setting storage-provisioner=true in profile "multinode-985000"
	I0805 16:21:19.953501    4640 addons.go:234] Setting addon storage-provisioner=true in "multinode-985000"
	I0805 16:21:19.953507    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:19.953501    4640 addons.go:69] Setting default-storageclass=true in profile "multinode-985000"
	I0805 16:21:19.953520    4640 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:21:19.953542    4640 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-985000"
	I0805 16:21:19.953772    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.953787    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.953870    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.953897    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.962985    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52500
	I0805 16:21:19.963341    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52502
	I0805 16:21:19.963365    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.963645    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.963722    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.963735    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.963997    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.964004    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.964027    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.964249    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.964372    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.964430    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.964458    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.964465    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.964535    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.966651    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:19.966874    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:19.967275    4640 cert_rotation.go:137] Starting client certificate rotation controller
	I0805 16:21:19.967411    4640 addons.go:234] Setting addon default-storageclass=true in "multinode-985000"
	I0805 16:21:19.967434    4640 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:21:19.967665    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.967688    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.973226    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52504
	I0805 16:21:19.973568    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.973922    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.973942    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.974163    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.974282    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.974363    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.974444    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.975405    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:21:19.975491    4640 out.go:177] * Verifying Kubernetes components...
	I0805 16:21:19.976182    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52506
	I0805 16:21:19.976461    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.976795    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.976812    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.976999    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.977392    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.977409    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.986027    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52508
	I0805 16:21:19.986361    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.986712    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.986741    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.986959    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.987071    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.987149    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.987227    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.988179    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:21:19.988299    4640 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 16:21:19.988307    4640 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 16:21:19.988315    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:21:19.988395    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:21:19.988484    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:21:19.988568    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:21:19.988639    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:21:20.032241    4640 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:21:20.032361    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:20.069496    4640 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:21:20.069510    4640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 16:21:20.069530    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:21:20.069717    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:21:20.069824    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:21:20.069935    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:21:20.070041    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
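The sshutil.go:53 lines above construct an SSH client from the machine's id_rsa key so the addon manifests can be pushed into the guest. A minimal sketch of building an equivalent client with golang.org/x/crypto/ssh, using the user, IP, and key path shown in the log (key path shortened); host key checking is skipped here only because this is a throwaway test VM:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Shortened form of the key path logged by sshutil.go:53.
	key, err := os.ReadFile("/Users/jenkins/.minikube/machines/multinode-985000/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker", // the guest user shown in the log
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test environment only
	}
	client, err := ssh.Dial("tcp", "192.169.0.13:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("ssh client ready")
}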
	I0805 16:21:20.084762    4640 command_runner.go:130] > apiVersion: v1
	I0805 16:21:20.084775    4640 command_runner.go:130] > data:
	I0805 16:21:20.084779    4640 command_runner.go:130] >   Corefile: |
	I0805 16:21:20.084782    4640 command_runner.go:130] >     .:53 {
	I0805 16:21:20.084785    4640 command_runner.go:130] >         errors
	I0805 16:21:20.084790    4640 command_runner.go:130] >         health {
	I0805 16:21:20.084794    4640 command_runner.go:130] >            lameduck 5s
	I0805 16:21:20.084796    4640 command_runner.go:130] >         }
	I0805 16:21:20.084812    4640 command_runner.go:130] >         ready
	I0805 16:21:20.084822    4640 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0805 16:21:20.084829    4640 command_runner.go:130] >            pods insecure
	I0805 16:21:20.084833    4640 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0805 16:21:20.084841    4640 command_runner.go:130] >            ttl 30
	I0805 16:21:20.084853    4640 command_runner.go:130] >         }
	I0805 16:21:20.084863    4640 command_runner.go:130] >         prometheus :9153
	I0805 16:21:20.084868    4640 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0805 16:21:20.084880    4640 command_runner.go:130] >            max_concurrent 1000
	I0805 16:21:20.084884    4640 command_runner.go:130] >         }
	I0805 16:21:20.084887    4640 command_runner.go:130] >         cache 30
	I0805 16:21:20.084898    4640 command_runner.go:130] >         loop
	I0805 16:21:20.084902    4640 command_runner.go:130] >         reload
	I0805 16:21:20.084905    4640 command_runner.go:130] >         loadbalance
	I0805 16:21:20.084908    4640 command_runner.go:130] >     }
	I0805 16:21:20.084911    4640 command_runner.go:130] > kind: ConfigMap
	I0805 16:21:20.084914    4640 command_runner.go:130] > metadata:
	I0805 16:21:20.084921    4640 command_runner.go:130] >   creationTimestamp: "2024-08-05T23:21:06Z"
	I0805 16:21:20.084926    4640 command_runner.go:130] >   name: coredns
	I0805 16:21:20.084929    4640 command_runner.go:130] >   namespace: kube-system
	I0805 16:21:20.084933    4640 command_runner.go:130] >   resourceVersion: "266"
	I0805 16:21:20.084937    4640 command_runner.go:130] >   uid: 5057af03-8824-4e67-a4b6-ef90c1ded7ce
	I0805 16:21:20.085056    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0805 16:21:20.184335    4640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:21:20.203408    4640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 16:21:20.278639    4640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:21:20.507141    4640 command_runner.go:130] > configmap/coredns replaced
	I0805 16:21:20.511660    4640 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
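The sed pipeline above rewrites the coredns ConfigMap dumped just before it, inserting a hosts block that maps host.minikube.internal to the gateway IP ahead of the forward stanza (and a log directive before errors). A string-level sketch of just the hosts injection, using an abbreviated Corefile; the real command does this in-place with kubectl replace:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Abbreviated form of the Corefile shown in the ConfigMap dump above.
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
    }`
	// Same effect as the sed `/forward ./i hosts {...}` in the log.
	hosts := `        hosts {
           192.169.0.1 host.minikube.internal
           fallthrough
        }
`
	patched := strings.Replace(corefile, "        forward .", hosts+"        forward .", 1)
	fmt.Println(patched)
}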
	I0805 16:21:20.511929    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:20.511932    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:20.512124    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:20.512125    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:20.512341    4640 node_ready.go:35] waiting up to 6m0s for node "multinode-985000" to be "Ready" ...
	I0805 16:21:20.512409    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:20.512416    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.512423    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.512424    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:20.512428    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.512430    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.512438    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.512446    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.520076    4640 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0805 16:21:20.520087    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.520092    4640 round_trippers.go:580]     Audit-Id: 304f14c4-a466-4fb6-b401-b28f4df4dfa1
	I0805 16:21:20.520095    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.520103    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.520107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.520111    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.520113    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:20.520117    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.521443    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.521456    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.521464    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.521474    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"381","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.521479    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.521487    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.521502    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.521511    4640 round_trippers.go:580]     Audit-Id: bcd9e393-6b08-4ffb-a73b-6e7c430f0212
	I0805 16:21:20.521518    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.521831    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:20.521865    4640 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"381","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.521904    4640 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:20.521914    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.521921    4640 round_trippers.go:473]     Content-Type: application/json
	I0805 16:21:20.521930    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.521935    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.530726    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.530739    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.530744    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.530748    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:20.530751    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.530754    4640 round_trippers.go:580]     Audit-Id: ba15a3b2-b69b-473e-a331-81e01385ad47
	I0805 16:21:20.530756    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.530758    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.530761    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.530773    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"383","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.588534    4640 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0805 16:21:20.588563    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.588570    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.588737    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.588752    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.588765    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.588764    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.588772    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.588919    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.588920    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.588931    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.589012    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I0805 16:21:20.589020    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.589028    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.589034    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.597496    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.597508    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.597513    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.597518    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.597521    4640 round_trippers.go:580]     Content-Length: 1273
	I0805 16:21:20.597523    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.597525    4640 round_trippers.go:580]     Audit-Id: d7394cfc-1eb3-4623-8a7f-a5088a0398c8
	I0805 16:21:20.597527    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.597530    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.597844    4640 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"391"},"items":[{"metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0805 16:21:20.598117    4640 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0805 16:21:20.598145    4640 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0805 16:21:20.598150    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.598157    4640 round_trippers.go:473]     Content-Type: application/json
	I0805 16:21:20.598166    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.598171    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.619819    4640 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0805 16:21:20.619836    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.619842    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.619846    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.619849    4640 round_trippers.go:580]     Content-Length: 1220
	I0805 16:21:20.619852    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.619855    4640 round_trippers.go:580]     Audit-Id: 299d4cc8-0cb5-4dd5-80b3-5d54592ecd90
	I0805 16:21:20.619859    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.619861    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.619898    4640 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0805 16:21:20.619983    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.619992    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.620141    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.620153    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.620166    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
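The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses/standard above re-asserts the storageclass.kubernetes.io/is-default-class annotation so that "standard" remains the default class. A hedged client-go sketch of the equivalent update (function and package names are hypothetical; the test drives this through kubectl apply and a REST PUT rather than this helper):

	package addons

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// markDefault fetches a StorageClass and re-applies the
	// is-default-class annotation, the state the PUT above leaves behind.
	func markDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	}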
	I0805 16:21:20.750372    4640 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0805 16:21:20.753871    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0805 16:21:20.759257    4640 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0805 16:21:20.767575    4640 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0805 16:21:20.774745    4640 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0805 16:21:20.786454    4640 command_runner.go:130] > pod/storage-provisioner created
	I0805 16:21:20.787838    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.787851    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.788087    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.788087    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.788098    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.788109    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.788117    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.788261    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.788280    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.788280    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.811467    4640 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0805 16:21:20.871433    4640 addons.go:510] duration metric: took 917.995637ms for enable addons: enabled=[default-storageclass storage-provisioner]
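Both addons are installed by running the pinned kubectl inside the guest over SSH, as the two ssh_runner lines at 16:21:20.20 show. A sketch of the same invocation as it would run on the node (paths copied from the log; the loop itself is an illustrative assumption):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Apply each addon manifest with the in-VM kubeconfig, matching the
		// ssh_runner commands in the log above.
		for _, manifest := range []string{
			"/etc/kubernetes/addons/storageclass.yaml",
			"/etc/kubernetes/addons/storage-provisioner.yaml",
		} {
			cmd := exec.Command("sudo",
				"KUBECONFIG=/var/lib/minikube/kubeconfig",
				"/var/lib/minikube/binaries/v1.30.3/kubectl",
				"apply", "-f", manifest)
			if out, err := cmd.CombinedOutput(); err != nil {
				log.Fatalf("apply %s: %v\n%s", manifest, err, out)
			}
		}
	}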
	I0805 16:21:21.014507    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:21.014532    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.014545    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.014553    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.014605    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:21.014619    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.014631    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.014638    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.017465    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.017464    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.017480    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.017492    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.017492    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.017496    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:21.017502    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.017504    4640 round_trippers.go:580]     Audit-Id: fb264fed-80ee-469b-a34e-7b1e8460f94b
	I0805 16:21:21.017506    4640 round_trippers.go:580]     Audit-Id: c9362211-8dfc-4385-87db-76c6486df53e
	I0805 16:21:21.017512    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.017513    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.017518    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.017519    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.017522    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.017524    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.017529    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.017545    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.017616    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"395","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:21.017684    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:21.017735    4640 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-985000" context rescaled to 1 replicas
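The GET/PUT against the coredns scale subresource above is how the deployment is dropped from the default two replicas to one for a single-node start. A minimal client-go sketch of the same rescale (helper and package names are hypothetical):

	package kapi

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// rescaleCoreDNS reads the Scale subresource and writes it back with
	// one replica, the same PUT recorded in the log above.
	func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		scale.Spec.Replicas = 1
		_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}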
	I0805 16:21:21.514170    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:21.514200    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.514219    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.514226    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.516804    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.516819    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.516826    4640 round_trippers.go:580]     Audit-Id: 9396255c-231d-48cb-a53f-22663307b969
	I0805 16:21:21.516830    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.516834    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.516839    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.516849    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.516854    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.516951    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.013275    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:22.013299    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:22.013311    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:22.013319    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:22.016138    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:22.016155    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:22.016163    4640 round_trippers.go:580]     Audit-Id: cc869aef-9ab4-4a7f-8835-cce2afa76dd9
	I0805 16:21:22.016168    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:22.016175    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:22.016182    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:22.016187    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:22.016193    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:22 GMT
	I0805 16:21:22.016497    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.512546    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:22.512561    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:22.512567    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:22.512572    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:22.515381    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:22.515393    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:22.515401    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:22.515407    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:22.515412    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:22.515416    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:22 GMT
	I0805 16:21:22.515420    4640 round_trippers.go:580]     Audit-Id: e7d470a0-7df5-4d85-9bb5-cbf15cfa989f
	I0805 16:21:22.515423    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:22.515634    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.515838    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
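From here the log is node_ready.go polling GET /api/v1/nodes/multinode-985000 roughly every 500ms until the node reports Ready, against the 6m0s budget announced at 16:21:20.512341. A hedged sketch of that wait loop (names are hypothetical; minikube's implementation differs in detail):

	package kverify

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the node object twice a second until its Ready
	// condition is True or the timeout elapses, matching the GET cadence
	// visible in the log.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("node %q not Ready within %s", name, timeout)
	}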
	I0805 16:21:23.012594    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:23.012606    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:23.012612    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:23.012616    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:23.014085    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:23.014095    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:23.014101    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:23.014104    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:23.014107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:23.014109    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:23.014113    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:23 GMT
	I0805 16:21:23.014116    4640 round_trippers.go:580]     Audit-Id: e12d5034-3bd9-498b-844e-12133805ded9
	I0805 16:21:23.014306    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:23.513150    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:23.513163    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:23.513168    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:23.513172    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:23.514595    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:23.514604    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:23.514610    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:23.514614    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:23.514617    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:23.514619    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:23.514622    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:23 GMT
	I0805 16:21:23.514635    4640 round_trippers.go:580]     Audit-Id: 2bc52e3b-1575-453f-87fa-51f4301a9426
	I0805 16:21:23.514871    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:24.012814    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:24.012826    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:24.012832    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:24.012835    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:24.014366    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:24.014379    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:24.014384    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:24.014388    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:24.014406    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:24.014411    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:24.014414    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:24 GMT
	I0805 16:21:24.014417    4640 round_trippers.go:580]     Audit-Id: f14d8611-e5e1-45fe-92f3-95559148c71b
	I0805 16:21:24.014572    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:24.513607    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:24.513620    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:24.513626    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:24.513629    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:24.515210    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:24.515220    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:24.515242    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:24.515253    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:24.515260    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:24.515264    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:24.515268    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:24 GMT
	I0805 16:21:24.515271    4640 round_trippers.go:580]     Audit-Id: 0a897d84-d437-4212-b36d-e414fedf55d4
	I0805 16:21:24.515427    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:25.013253    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:25.013272    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:25.013283    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:25.013321    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:25.015275    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:25.015308    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:25.015317    4640 round_trippers.go:580]     Audit-Id: ced7b45c-a072-4322-89ab-d0cc21ddfb1d
	I0805 16:21:25.015322    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:25.015325    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:25.015328    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:25.015332    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:25.015336    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:25 GMT
	I0805 16:21:25.015627    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:25.015849    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:25.512881    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:25.512902    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:25.512914    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:25.512920    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:25.515502    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:25.515517    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:25.515524    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:25.515529    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:25.515534    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:25.515538    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:25.515542    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:25 GMT
	I0805 16:21:25.515545    4640 round_trippers.go:580]     Audit-Id: dd6b59c1-dde3-4d67-b446-8823ad717d4f
	I0805 16:21:25.515665    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:26.013787    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:26.013811    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:26.013824    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:26.013830    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:26.016420    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:26.016440    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:26.016463    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:26 GMT
	I0805 16:21:26.016470    4640 round_trippers.go:580]     Audit-Id: 19939705-2879-44e6-830c-0c86394087ed
	I0805 16:21:26.016473    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:26.016485    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:26.016490    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:26.016494    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:26.016965    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:26.512523    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:26.512536    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:26.512541    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:26.512544    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:26.514158    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:26.514167    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:26.514172    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:26.514176    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:26.514179    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:26.514182    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:26 GMT
	I0805 16:21:26.514184    4640 round_trippers.go:580]     Audit-Id: f2346665-2701-41e1-94b0-41a70aa2f170
	I0805 16:21:26.514187    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:26.514489    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:27.013107    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:27.013136    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:27.013148    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:27.013155    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:27.015615    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:27.015632    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:27.015639    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:27 GMT
	I0805 16:21:27.015655    4640 round_trippers.go:580]     Audit-Id: 6abee22d-c1db-48e9-99db-e07791ed571f
	I0805 16:21:27.015661    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:27.015664    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:27.015667    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:27.015672    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:27.015747    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:27.015996    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:27.513549    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:27.513570    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:27.513582    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:27.513589    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:27.516173    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:27.516189    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:27.516197    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:27.516200    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:27.516204    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:27.516209    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:27 GMT
	I0805 16:21:27.516212    4640 round_trippers.go:580]     Audit-Id: a227585b-ae23-4bd1-b1dc-643eadd970cc
	I0805 16:21:27.516215    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:27.516416    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:28.014104    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:28.014132    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:28.014143    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:28.014159    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:28.016690    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:28.016705    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:28.016713    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:28.016717    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:28.016721    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:28.016725    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:28.016728    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:28 GMT
	I0805 16:21:28.016731    4640 round_trippers.go:580]     Audit-Id: 0d14831c-cc1f-41a9-a252-85e191b9594d
	I0805 16:21:28.016834    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:28.512703    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:28.512726    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:28.512739    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:28.512747    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:28.515176    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:28.515190    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:28.515197    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:28 GMT
	I0805 16:21:28.515201    4640 round_trippers.go:580]     Audit-Id: 6af459f8-bb08-43bf-ac7f-51ccacd5d664
	I0805 16:21:28.515206    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:28.515211    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:28.515215    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:28.515219    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:28.515378    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.013324    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:29.013354    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:29.013360    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:29.013364    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:29.014793    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:29.014804    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:29.014809    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:29.014813    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:29 GMT
	I0805 16:21:29.014817    4640 round_trippers.go:580]     Audit-Id: 2e50ff34-0c55-4136-b537-eee73f73706d
	I0805 16:21:29.014819    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:29.014822    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:29.014826    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:29.015098    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.513802    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:29.513832    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:29.513844    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:29.513852    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:29.516479    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:29.516496    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:29.516504    4640 round_trippers.go:580]     Audit-Id: bcbc3920-26b4-45f4-b91a-ce0e3dc11770
	I0805 16:21:29.516529    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:29.516538    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:29.516544    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:29.516549    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:29.516554    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:29 GMT
	I0805 16:21:29.516682    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.516938    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
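
For orientation, here is a minimal client-go sketch of the loop this log is executing: re-fetch the Node roughly every 500 ms (matching the timestamps above) and read its Ready condition, which is the `"Ready":"False"` value node_ready.go keeps printing. This is an illustration, not minikube's actual node_ready.go; the kubeconfig path and the node name are assumptions taken from the log.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isNodeReady reports whether the Node's Ready condition is True; this is
// the field the poll above is reading out of each GET response body.
func isNodeReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Load a kubeconfig the way kubectl would (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	// Re-fetch the node on a ~500ms cadence, as the timestamps above show.
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-985000", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if isNodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic(ctx.Err())
		case <-ticker.C:
		}
	}
}
```
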
	I0805 16:21:30.013325    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:30.013349    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:30.013436    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:30.013448    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:30.016209    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:30.016222    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:30.016228    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:30.016233    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:30 GMT
	I0805 16:21:30.016238    4640 round_trippers.go:580]     Audit-Id: fb0bd3e0-89c3-4c77-a27d-be315cab22b7
	I0805 16:21:30.016242    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:30.016277    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:30.016283    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:30.016477    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:30.514344    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:30.514386    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:30.514482    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:30.514494    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:30.518828    4640 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 16:21:30.518860    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:30.518870    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:30.518876    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:30.518882    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:30 GMT
	I0805 16:21:30.518888    4640 round_trippers.go:580]     Audit-Id: c1b08932-ee78-4dcb-a190-3a8b24421284
	I0805 16:21:30.518894    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:30.518899    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:30.519002    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:31.012673    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:31.012701    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:31.012712    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:31.012718    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:31.015543    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:31.015560    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:31.015568    4640 round_trippers.go:580]     Audit-Id: b6586a64-ec07-44ee-8a00-1f3b8a00e0bd
	I0805 16:21:31.015572    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:31.015576    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:31.015580    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:31.015583    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:31.015589    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:31 GMT
	I0805 16:21:31.015682    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:31.512531    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:31.512543    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:31.512550    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:31.512554    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:31.514066    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:31.514076    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:31.514081    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:31 GMT
	I0805 16:21:31.514085    4640 round_trippers.go:580]     Audit-Id: 7d410de7-b0d5-4d4e-8455-d31b0df7d302
	I0805 16:21:31.514089    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:31.514093    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:31.514096    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:31.514107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:31.514758    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:32.014110    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:32.014136    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:32.014147    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:32.014157    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:32.016553    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:32.016570    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:32.016580    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:32.016586    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:32.016592    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:32.016598    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:32.016602    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:32 GMT
	I0805 16:21:32.016605    4640 round_trippers.go:580]     Audit-Id: 67fdb64b-273a-46c2-aac5-c3b115422aa4
	I0805 16:21:32.016861    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:32.017132    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:32.513171    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:32.513188    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:32.513195    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:32.513198    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:32.514908    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:32.514920    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:32.514925    4640 round_trippers.go:580]     Audit-Id: 0f5a2e98-6be6-4963-8897-91c70642048c
	I0805 16:21:32.514928    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:32.514931    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:32.514933    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:32.514936    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:32.514939    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:32 GMT
	I0805 16:21:32.515082    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:33.013769    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:33.013803    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:33.013814    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:33.013822    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:33.016491    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:33.016509    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:33.016519    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:33 GMT
	I0805 16:21:33.016526    4640 round_trippers.go:580]     Audit-Id: 96b5f269-7be9-42a9-9687-cba57d05f76e
	I0805 16:21:33.016532    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:33.016538    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:33.016543    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:33.016548    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:33.016715    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:33.512751    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:33.512772    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:33.512783    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:33.512789    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:33.515431    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:33.515480    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:33.515498    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:33.515506    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:33.515510    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:33 GMT
	I0805 16:21:33.515513    4640 round_trippers.go:580]     Audit-Id: 6cd252a3-d07d-441e-bcf4-bc3bd00c2488
	I0805 16:21:33.515517    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:33.515520    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:33.515747    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.013003    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:34.013032    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:34.013043    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:34.013052    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:34.015447    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:34.015465    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:34.015472    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:34.015476    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:34.015479    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:34.015484    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:34.015487    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:34 GMT
	I0805 16:21:34.015492    4640 round_trippers.go:580]     Audit-Id: efcfb0d1-8345-4db5-bce9-e31085842da3
	I0805 16:21:34.015599    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.513298    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:34.513317    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:34.513376    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:34.513383    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:34.515051    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:34.515065    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:34.515072    4640 round_trippers.go:580]     Audit-Id: 2a42cb6a-0051-47bd-85f4-9f8ca80afa70
	I0805 16:21:34.515078    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:34.515081    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:34.515087    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:34.515099    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:34.515103    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:34 GMT
	I0805 16:21:34.515359    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.515540    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:35.013932    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:35.013957    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:35.013968    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:35.013976    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:35.016505    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:35.016524    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:35.016530    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:35.016537    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:35.016541    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:35.016544    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:35.016555    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:35 GMT
	I0805 16:21:35.016559    4640 round_trippers.go:580]     Audit-Id: 09fa0e04-c026-439e-9cd7-392fd82b16fe
	I0805 16:21:35.016913    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:35.513491    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:35.513514    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:35.513526    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:35.513532    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:35.515995    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:35.516012    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:35.516020    4640 round_trippers.go:580]     Audit-Id: a2b05a8a-9a91-4d20-93d0-b8701ac59b95
	I0805 16:21:35.516024    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:35.516036    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:35.516041    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:35.516055    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:35.516060    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:35 GMT
	I0805 16:21:35.516151    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:36.013521    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.013549    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.013561    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.013566    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.016095    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:36.016112    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.016119    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.016131    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.016136    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.016140    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.016144    4640 round_trippers.go:580]     Audit-Id: 77e04f39-a037-4ea2-9716-ad04139089d1
	I0805 16:21:36.016147    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.016230    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:36.016465    4640 node_ready.go:49] node "multinode-985000" has status "Ready":"True"
	I0805 16:21:36.016481    4640 node_ready.go:38] duration metric: took 15.504115701s for node "multinode-985000" to be "Ready" ...
	I0805 16:21:36.016489    4640 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
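
The "extra waiting" step announced above begins with one unfiltered PodList GET (/api/v1/namespaces/kube-system/pods, next in the log) and then narrows to the system-critical labels named in that line. A rough sketch of that filter, assuming client-go; the client-side filtering and the listCriticalPods helper are our reconstruction, not minikube's code. The label set is copied verbatim from the log line.

```go
package readiness // hypothetical package name

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// criticalLabels is the key=value set listed in the pod_ready.go:35 line.
var criticalLabels = [][2]string{
	{"k8s-app", "kube-dns"},
	{"component", "etcd"},
	{"component", "kube-apiserver"},
	{"component", "kube-controller-manager"},
	{"k8s-app", "kube-proxy"},
	{"component", "kube-scheduler"},
}

// listCriticalPods issues the same unfiltered List the log shows and keeps
// only the pods carrying at least one of the system-critical labels.
func listCriticalPods(ctx context.Context, cs *kubernetes.Clientset) ([]corev1.Pod, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var out []corev1.Pod
	for _, pod := range pods.Items {
		for _, kv := range criticalLabels {
			if pod.Labels[kv[0]] == kv[1] {
				out = append(out, pod)
				break
			}
		}
	}
	return out, nil
}
```
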
	I0805 16:21:36.016543    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:36.016551    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.016559    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.016563    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.019046    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:36.019057    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.019065    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.019069    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.019078    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.019081    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.019084    4640 round_trippers.go:580]     Audit-Id: 96048303-6e62-4ba8-a291-bc1ad976756e
	I0805 16:21:36.019091    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.019721    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
	I0805 16:21:36.021921    4640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:36.021960    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:36.021964    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.021970    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.021974    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.023179    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.023187    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.023192    4640 round_trippers.go:580]     Audit-Id: ba42f387-f106-4773-86de-3a22085fd86a
	I0805 16:21:36.023195    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.023198    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.023200    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.023204    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.023208    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.023410    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:36.023652    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.023659    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.023665    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.023671    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.024732    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.024744    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.024752    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.024758    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.024765    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.024768    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.024771    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.024775    4640 round_trippers.go:580]     Audit-Id: 2008721c-b230-4e73-b037-d3a843d7c7c8
	I0805 16:21:36.024909    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:36.523495    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:36.523508    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.523514    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.523519    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.525003    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.525014    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.525020    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.525042    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.525049    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.525053    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.525060    4640 round_trippers.go:580]     Audit-Id: 1ad5a8dd-64b3-4881-9a8e-e5eaab368c53
	I0805 16:21:36.525066    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.525202    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:36.525483    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.525490    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.525498    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.525502    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.526801    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.526810    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.526814    4640 round_trippers.go:580]     Audit-Id: 71c4017f-a267-489e-86ed-59098eae3b88
	I0805 16:21:36.526817    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.526834    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.526840    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.526846    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.526850    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.527025    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:37.022759    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:37.022781    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.022791    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.022799    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.025487    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.025503    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.025510    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.025515    4640 round_trippers.go:580]     Audit-Id: 7446d9fd-22ed-4d20-b0f2-e8c4a88b04f4
	I0805 16:21:37.025536    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.025543    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.025547    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.025556    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.025649    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:37.026010    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.026020    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.026028    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.026033    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.027337    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.027346    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.027354    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.027359    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.027363    4640 round_trippers.go:580]     Audit-Id: a309eed4-f088-47f7-8b84-4761b59dbb8c
	I0805 16:21:37.027366    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.027368    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.027371    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.027425    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.522283    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:37.522304    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.522315    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.522322    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.524762    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.524776    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.524782    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.524788    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.524792    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.524795    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.524799    4640 round_trippers.go:580]     Audit-Id: eaef42a8-7b43-4091-9b70-8d31adc979e5
	I0805 16:21:37.524803    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.525073    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0805 16:21:37.525438    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.525480    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.525488    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.525492    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.526890    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.526903    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.526912    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.526918    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.526927    4640 round_trippers.go:580]     Audit-Id: a3a0e71a-c982-4504-9fae-e76101688c05
	I0805 16:21:37.526931    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.526935    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.526937    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.527034    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.527211    4640 pod_ready.go:92] pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.527220    4640 pod_ready.go:81] duration metric: took 1.505289062s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.527230    4640 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.527259    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-985000
	I0805 16:21:37.527264    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.527269    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.527277    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.528379    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.528389    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.528394    4640 round_trippers.go:580]     Audit-Id: 3cf4f372-47fb-4b72-9b30-185d93d01537
	I0805 16:21:37.528401    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.528405    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.528408    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.528411    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.528414    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.528618    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-985000","namespace":"kube-system","uid":"8d7ca2d9-8c7b-41b9-a199-de6449107471","resourceVersion":"379","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.mirror":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030299Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0805 16:21:37.528833    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.528840    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.528845    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.528850    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.529802    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.529808    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.529813    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.529816    4640 round_trippers.go:580]     Audit-Id: 314df0bd-894e-4607-bad0-3348c18fe807
	I0805 16:21:37.529820    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.529823    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.529826    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.529833    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.530046    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.530203    4640 pod_ready.go:92] pod "etcd-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.530210    4640 pod_ready.go:81] duration metric: took 2.974841ms for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.530218    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.530249    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-985000
	I0805 16:21:37.530253    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.530259    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.530262    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.531449    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.531456    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.531461    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.531463    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.531467    4640 round_trippers.go:580]     Audit-Id: 1801a8f0-22d5-44e8-942c-ea521b1ffa66
	I0805 16:21:37.531469    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.531475    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.531477    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.531592    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-985000","namespace":"kube-system","uid":"9be3378a-5fab-4907-baad-507918e714e4","resourceVersion":"369","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.mirror":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0805 16:21:37.531810    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.531820    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.531825    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.531830    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.532663    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.532668    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.532672    4640 round_trippers.go:580]     Audit-Id: 6d0fc4ed-c609-4ee7-a57f-b61eed1bc442
	I0805 16:21:37.532675    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.532679    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.532682    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.532684    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.532688    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.532807    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.532958    4640 pod_ready.go:92] pod "kube-apiserver-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.532967    4640 pod_ready.go:81] duration metric: took 2.743443ms for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.532973    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.533000    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-985000
	I0805 16:21:37.533004    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.533009    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.533012    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.533987    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.533995    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.534000    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.534004    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.534020    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.534027    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.534031    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.534034    4640 round_trippers.go:580]     Audit-Id: 97e4dc5c-f4bf-419e-8b15-be800418054c
	I0805 16:21:37.534147    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-985000","namespace":"kube-system","uid":"4ad64361-65de-4b0b-b2a3-07df18c2e603","resourceVersion":"342","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.mirror":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.seen":"2024-08-05T23:21:06.366027130Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0805 16:21:37.534370    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.534377    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.534383    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.534386    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.535293    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.535301    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.535305    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.535308    4640 round_trippers.go:580]     Audit-Id: a4c04a0a-9401-41d1-a0fc-f2a2187abde4
	I0805 16:21:37.535310    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.535313    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.535320    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.535323    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.535432    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.535591    4640 pod_ready.go:92] pod "kube-controller-manager-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.535599    4640 pod_ready.go:81] duration metric: took 2.621545ms for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.535606    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.535629    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwgw7
	I0805 16:21:37.535634    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.535639    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.535643    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.536550    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.536557    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.536565    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.536570    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.536575    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.536578    4640 round_trippers.go:580]     Audit-Id: 5a688e80-7db3-4070-a1a8-c3419ddb4d44
	I0805 16:21:37.536580    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.536582    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.536704    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fwgw7","generateName":"kube-proxy-","namespace":"kube-system","uid":"3fb72e39-699d-4123-ae5e-e314a191d904","resourceVersion":"409","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b6258e6-7b31-4600-b32b-4a269867c123","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b6258e6-7b31-4600-b32b-4a269867c123\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0805 16:21:37.614745    4640 request.go:629] Waited for 77.807971ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.614815    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.614822    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.614839    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.614845    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.616956    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.616984    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.616989    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.616993    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.616996    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.616999    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.617002    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.617005    4640 round_trippers.go:580]     Audit-Id: e297627c-4c52-417b-935c-d406bf086c16
	I0805 16:21:37.617232    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.617428    4640 pod_ready.go:92] pod "kube-proxy-fwgw7" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.617437    4640 pod_ready.go:81] duration metric: took 81.82693ms for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
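The "Waited for … due to client-side throttling, not priority and fairness" lines are emitted by client-go's local token-bucket rate limiter (default QPS 5, burst 10), which delays requests in the client before the server's API Priority and Fairness ever sees them. That is why the waits here are in the 80-200ms range even though every response comes back in 0-3 milliseconds. A sketch of where those limits live on rest.Config; the values shown are illustrative, not minikube's settings:

```go
// A sketch of the client-side rate-limit knobs behind the "Waited for ...
// due to client-side throttling" messages. QPS/Burst values are illustrative.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default is 5 requests/second
	cfg.Burst = 100 // default is 10
	client := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println("clientset constructed:", client != nil)
}
```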
	I0805 16:21:37.617444    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.815296    4640 request.go:629] Waited for 197.761592ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:21:37.815347    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:21:37.815355    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.815366    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.815376    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.817961    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.817976    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.818001    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.818008    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:37.818049    4640 round_trippers.go:580]     Audit-Id: cc44c4e8-8012-4718-aa24-c05fec399a2e
	I0805 16:21:37.818064    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.818078    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.818082    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.818186    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-985000","namespace":"kube-system","uid":"5e23b1b7-e45d-4b43-831c-aa835c5e536d","resourceVersion":"396","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.mirror":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.seen":"2024-08-05T23:21:06.366029633Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0805 16:21:38.014472    4640 request.go:629] Waited for 195.947535ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:38.014569    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:38.014578    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.014589    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.014597    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.017395    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.017406    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.017413    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.017418    4640 round_trippers.go:580]     Audit-Id: 925efcbc-f43b-4431-905e-26927bb76a48
	I0805 16:21:38.017422    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.017428    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.017434    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.017441    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.017905    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:38.018153    4640 pod_ready.go:92] pod "kube-scheduler-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:38.018164    4640 pod_ready.go:81] duration metric: took 400.713995ms for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:38.018173    4640 pod_ready.go:38] duration metric: took 2.001673669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
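The pod_ready.go phase above polls each system-critical pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) for the Ready condition, re-fetching the node between checks, with a 6m0s budget per pod. A minimal sketch of the same polling pattern with client-go; this is not minikube's actual implementation, and the pod name and kubeconfig path are assumptions:

```go
// A minimal sketch of waiting for a pod to report the Ready condition,
// using client-go and a recent apimachinery (wait.PollUntilContextTimeout).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms for up to 6 minutes, mirroring the 6m0s budget in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-multinode-985000", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			return isPodReady(pod), nil
		})
	fmt.Println("ready wait finished, err =", err)
}
```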
	I0805 16:21:38.018198    4640 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:21:38.018268    4640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:21:38.030133    4640 command_runner.go:130] > 1977
	I0805 16:21:38.030360    4640 api_server.go:72] duration metric: took 18.07694495s to wait for apiserver process to appear ...
	I0805 16:21:38.030369    4640 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:21:38.030384    4640 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:21:38.034009    4640 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
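Before trusting the API, the test first confirms the kube-apiserver process exists over SSH (sudo pgrep -xnf kube-apiserver.*minikube.*, which returns the PID, 1977 here) and then probes /healthz, expecting a 200 response with body "ok". A rough equivalent of that probe, for illustration only; the real client presents the cluster CA and client certificates, and an unauthenticated request may be rejected depending on cluster configuration:

```go
// A rough, illustration-only equivalent of the healthz check above:
// GET https://<apiserver>:8443/healthz and expect "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping TLS verification only keeps the sketch self-contained;
		// a real client verifies against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.169.0.13:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok
}
```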
	I0805 16:21:38.034048    4640 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0805 16:21:38.034052    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.034058    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.034063    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.034646    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:38.034653    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.034658    4640 round_trippers.go:580]     Audit-Id: 9f5c9766-330c-4bb5-a5de-4c3a0fdbe474
	I0805 16:21:38.034662    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.034665    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.034668    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.034670    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.034673    4640 round_trippers.go:580]     Content-Length: 263
	I0805 16:21:38.034676    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.034687    4640 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0805 16:21:38.034733    4640 api_server.go:141] control plane version: v1.30.3
	I0805 16:21:38.034742    4640 api_server.go:131] duration metric: took 4.369143ms to wait for apiserver health ...
	I0805 16:21:38.034747    4640 system_pods.go:43] waiting for kube-system pods to appear ...
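The /version payload above decodes into apimachinery's version.Info, which is where the "control plane version: v1.30.3" line comes from. The same data can be fetched through client-go's discovery client instead of a raw GET; a sketch, assuming a reachable kubeconfig:

```go
// A sketch of retrieving the same /version payload through client-go's
// discovery client rather than a hand-rolled HTTP request.
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	info, err := dc.ServerVersion() // GET /version, decoded into version.Info
	if err != nil {
		panic(err)
	}
	// Fields mirror the JSON in the log: major, minor, gitVersion, goVersion, ...
	fmt.Printf("control plane version: %s (go %s, %s)\n", info.GitVersion, info.GoVersion, info.Platform)
}
```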
	I0805 16:21:38.213812    4640 request.go:629] Waited for 178.999213ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.213950    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.213960    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.213970    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.213980    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.217309    4640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:21:38.217324    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.217331    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.217336    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.217363    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.217372    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.217377    4640 round_trippers.go:580]     Audit-Id: 0f21513f-44e7-4d2f-bacd-2a12fceef757
	I0805 16:21:38.217381    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.217979    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0805 16:21:38.219249    4640 system_pods.go:59] 8 kube-system pods found
	I0805 16:21:38.219261    4640 system_pods.go:61] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:21:38.219265    4640 system_pods.go:61] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:21:38.219268    4640 system_pods.go:61] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:21:38.219271    4640 system_pods.go:61] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:21:38.219276    4640 system_pods.go:61] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:21:38.219278    4640 system_pods.go:61] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:21:38.219280    4640 system_pods.go:61] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:21:38.219283    4640 system_pods.go:61] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:21:38.219286    4640 system_pods.go:74] duration metric: took 184.535842ms to wait for pod list to return data ...
	I0805 16:21:38.219291    4640 default_sa.go:34] waiting for default service account to be created ...
	I0805 16:21:38.413643    4640 request.go:629] Waited for 194.308242ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:21:38.413680    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:21:38.413687    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.413695    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.413699    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.415522    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:38.415531    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.415536    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.415539    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.415543    4640 round_trippers.go:580]     Content-Length: 261
	I0805 16:21:38.415546    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.415548    4640 round_trippers.go:580]     Audit-Id: efc85c0c-9bbc-4cb7-8c14-19ba2f873800
	I0805 16:21:38.415551    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.415553    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.415563    4640 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b0626468-f73b-4e9b-8270-658495d43f4a","resourceVersion":"337","creationTimestamp":"2024-08-05T23:21:19Z"}}]}
	I0805 16:21:38.415681    4640 default_sa.go:45] found service account: "default"
	I0805 16:21:38.415690    4640 default_sa.go:55] duration metric: took 196.394719ms for default service account to be created ...
	I0805 16:21:38.415697    4640 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 16:21:38.613742    4640 request.go:629] Waited for 198.012461ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.613858    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.613864    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.613870    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.613874    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.616077    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.616090    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.616099    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.616106    4640 round_trippers.go:580]     Audit-Id: 3f8a6f23-788b-41c4-8dee-6ff59c02c21d
	I0805 16:21:38.616112    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.616116    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.616126    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.616143    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.616489    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0805 16:21:38.617747    4640 system_pods.go:86] 8 kube-system pods found
	I0805 16:21:38.617761    4640 system_pods.go:89] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:21:38.617766    4640 system_pods.go:89] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:21:38.617770    4640 system_pods.go:89] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:21:38.617773    4640 system_pods.go:89] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:21:38.617776    4640 system_pods.go:89] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:21:38.617780    4640 system_pods.go:89] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:21:38.617784    4640 system_pods.go:89] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:21:38.617787    4640 system_pods.go:89] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:21:38.617792    4640 system_pods.go:126] duration metric: took 202.090644ms to wait for k8s-apps to be running ...
	I0805 16:21:38.617801    4640 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 16:21:38.617848    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:21:38.629448    4640 system_svc.go:56] duration metric: took 11.643357ms WaitForService to wait for kubelet
	I0805 16:21:38.629463    4640 kubeadm.go:582] duration metric: took 18.676048708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:21:38.629475    4640 node_conditions.go:102] verifying NodePressure condition ...
	I0805 16:21:38.814057    4640 request.go:629] Waited for 184.539621ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0805 16:21:38.814182    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0805 16:21:38.814193    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.814205    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.814213    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.817076    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.817092    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.817099    4640 round_trippers.go:580]     Audit-Id: 83bb2c88-8ae3-45b7-a0f6-9d3f9fead5f2
	I0805 16:21:38.817103    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.817112    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.817116    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.817123    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.817128    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:39 GMT
	I0805 16:21:38.817200    4640 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I0805 16:21:38.817474    4640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:21:38.817490    4640 node_conditions.go:123] node cpu capacity is 2
	I0805 16:21:38.817502    4640 node_conditions.go:105] duration metric: took 188.023135ms to run NodePressure ...
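The NodePressure check reads its capacity figures straight off the Node objects returned by the NodeList (ephemeral-storage 17734596Ki and cpu 2 here). A sketch of pulling the same fields with client-go, under the same kubeconfig assumption as above:

```go
// A sketch of reading the capacity figures the NodePressure check logs
// (ephemeral-storage and cpu) from each Node's status.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
```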
	I0805 16:21:38.817512    4640 start.go:241] waiting for startup goroutines ...
	I0805 16:21:38.817520    4640 start.go:246] waiting for cluster config update ...
	I0805 16:21:38.817530    4640 start.go:255] writing updated cluster config ...
	I0805 16:21:38.838343    4640 out.go:177] 
	I0805 16:21:38.859405    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:38.859465    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:38.881260    4640 out.go:177] * Starting "multinode-985000-m02" worker node in "multinode-985000" cluster
	I0805 16:21:38.923226    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:21:38.923254    4640 cache.go:56] Caching tarball of preloaded images
	I0805 16:21:38.923425    4640 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:21:38.923439    4640 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:21:38.923503    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:38.924257    4640 start.go:360] acquireMachinesLock for multinode-985000-m02: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:21:38.924355    4640 start.go:364] duration metric: took 78.775µs to acquireMachinesLock for "multinode-985000-m02"
	I0805 16:21:38.924379    4640 start.go:93] Provisioning new machine with config: &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0805 16:21:38.924443    4640 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0805 16:21:38.946258    4640 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:21:38.946431    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:38.946482    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:38.956315    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52515
	I0805 16:21:38.956651    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:38.957008    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:38.957028    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:38.957245    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:38.957408    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:38.957527    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:38.957642    4640 start.go:159] libmachine.API.Create for "multinode-985000" (driver="hyperkit")
	I0805 16:21:38.957663    4640 client.go:168] LocalClient.Create starting
	I0805 16:21:38.957697    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:21:38.957735    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:21:38.957747    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:21:38.957790    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:21:38.957819    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:21:38.957833    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:21:38.957849    4640 main.go:141] libmachine: Running pre-create checks...
	I0805 16:21:38.957855    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .PreCreateCheck
	I0805 16:21:38.957933    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:38.957959    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:38.967700    4640 main.go:141] libmachine: Creating machine...
	I0805 16:21:38.967725    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .Create
	I0805 16:21:38.967957    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:38.968233    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:38.967940    4677 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:21:38.968338    4640 main.go:141] libmachine: (multinode-985000-m02) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:21:39.171726    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.171650    4677 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa...
	I0805 16:21:39.251408    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.251327    4677 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk...
	I0805 16:21:39.251421    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Writing magic tar header
	I0805 16:21:39.251439    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Writing SSH key tar header
	I0805 16:21:39.252021    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.251983    4677 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02 ...
	I0805 16:21:39.622286    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:39.622309    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid
	I0805 16:21:39.622382    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Using UUID ab5b9c9f-9e28-4bc2-8fcd-b98fce011173
	I0805 16:21:39.647304    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Generated MAC a6:1c:88:9c:44:3
	I0805 16:21:39.647324    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:21:39.647363    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0805 16:21:39.647396    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0805 16:21:39.647440    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/j
enkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:21:39.647475    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ab5b9c9f-9e28-4bc2-8fcd-b98fce011173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/mult
inode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
	I0805 16:21:39.647493    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:21:39.650407    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Pid is 4678
	I0805 16:21:39.650823    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 0
	I0805 16:21:39.650838    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:39.650909    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:39.651807    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:39.651870    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:39.651899    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:39.651984    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:39.652006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:39.652022    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:39.652032    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:39.652039    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:39.652046    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:39.652082    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:39.652100    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:39.652113    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:39.652123    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:39.652143    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:39.657903    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:21:39.666018    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:21:39.666937    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:21:39.666963    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:21:39.666975    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:21:39.666990    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:21:40.050205    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:21:40.050221    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:21:40.165006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:21:40.165028    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:21:40.165042    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:21:40.165049    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:21:40.165899    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:21:40.165911    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:21:41.653048    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 1
	I0805 16:21:41.653066    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:41.653144    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:41.653911    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:41.653968    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:41.653979    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:41.653992    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:41.653998    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:41.654006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:41.654015    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:41.654030    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:41.654045    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:41.654053    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:41.654061    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:41.654070    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:41.654078    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:41.654093    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:43.655366    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 2
	I0805 16:21:43.655382    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:43.655471    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:43.656243    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:43.656291    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:43.656301    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:43.656319    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:43.656329    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:43.656351    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:43.656362    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:43.656369    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:43.656375    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:43.656391    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:43.656406    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:43.656416    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:43.656423    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:43.656437    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:45.657345    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 3
	I0805 16:21:45.657361    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:45.657459    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:45.658214    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:45.658269    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:45.658278    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:45.658286    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:45.658295    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:45.658310    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:45.658321    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:45.658329    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:45.658337    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:45.658349    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:45.658362    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:45.658370    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:45.658378    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:45.658387    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:45.751756    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:21:45.751812    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:21:45.751830    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:21:45.774801    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:21:47.659182    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 4
	I0805 16:21:47.659208    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:47.659291    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:47.660062    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:47.660112    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:47.660128    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:47.660137    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:47.660145    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:47.660153    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:47.660162    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:47.660178    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:47.660192    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:47.660204    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:47.660218    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:47.660230    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:47.660240    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:47.660260    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:49.662115    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 5
	I0805 16:21:49.662148    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:49.662310    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:49.663748    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:49.663812    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0805 16:21:49.663831    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b00c}
	I0805 16:21:49.663846    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found match: a6:1c:88:9c:44:3
	I0805 16:21:49.663856    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | IP: 192.169.0.14
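
For reference, the "Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases" loop above amounts to scanning the macOS bootpd lease file for the new VM's MAC address, retrying every couple of seconds until a lease appears (Attempts 0 through 5 here). A minimal Go sketch of that lookup; the key=value lease-file layout and the helper name are assumptions, not minikube's actual implementation:

package main

import (
	"bufio"
	"errors"
	"fmt"
	"os"
	"strings"
)

// findIPForMAC scans the macOS DHCP lease file for a hardware address and
// returns the matching IP. Assumes each lease block lists ip_address before
// hw_address, as bootpd typically writes it.
func findIPForMAC(leaseFile, mac string) (string, error) {
	f, err := os.Open(leaseFile)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			// hw_address is stored as "<type>,<mac>", e.g. "1,a6:1c:88:9c:44:3"
			if strings.HasSuffix(line, ","+mac) {
				return ip, nil
			}
		}
	}
	if err := sc.Err(); err != nil {
		return "", err
	}
	return "", errors.New("no lease found for " + mac)
}

func main() {
	ip, err := findIPForMAC("/var/db/dhcpd_leases", "a6:1c:88:9c:44:3")
	if err != nil {
		fmt.Println("retry:", err) // the driver retries, as in Attempts 0-5 above
		return
	}
	fmt.Println("IP:", ip)
}
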
	I0805 16:21:49.663945    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:49.664855    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:49.665006    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:49.665127    4640 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 16:21:49.665139    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:21:49.665271    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:49.665344    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:49.666326    4640 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 16:21:49.666337    4640 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 16:21:49.666342    4640 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 16:21:49.666348    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.666471    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.666603    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.666743    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.666869    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.667045    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.667279    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.667287    4640 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 16:21:49.724369    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
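
The "Waiting for SSH to be available" step then reduces to dialing <ip>:22 and running `exit 0` until it succeeds, which is exactly the probe logged above. A hedged sketch using golang.org/x/crypto/ssh; the retry cadence and function name are illustrative:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials ip:22 as the given user and runs "exit 0" until it
// succeeds or the deadline passes.
func waitForSSH(ip, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
		Timeout:         5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if client, err := ssh.Dial("tcp", ip+":22", cfg); err == nil {
			sess, err := client.NewSession()
			if err == nil {
				runErr := sess.Run("exit 0") // same command as the log above
				sess.Close()
				client.Close()
				if runErr == nil {
					return nil // SSH is available
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh not available on %s within %s", ip, timeout)
}
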
	I0805 16:21:49.724382    4640 main.go:141] libmachine: Detecting the provisioner...
	I0805 16:21:49.724388    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.724522    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.724626    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.724719    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.724810    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.724938    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.725087    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.725094    4640 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 16:21:49.782403    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 16:21:49.782454    4640 main.go:141] libmachine: found compatible host: buildroot
	I0805 16:21:49.782460    4640 main.go:141] libmachine: Provisioning with buildroot...
	I0805 16:21:49.782466    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.782595    4640 buildroot.go:166] provisioning hostname "multinode-985000-m02"
	I0805 16:21:49.782606    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.782698    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.782797    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.782871    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.782964    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.783079    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.783204    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.783350    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.783359    4640 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000-m02 && echo "multinode-985000-m02" | sudo tee /etc/hostname
	I0805 16:21:49.854175    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000-m02
	
	I0805 16:21:49.854190    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.854319    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.854421    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.854492    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.854587    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.854712    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.854870    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.854882    4640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:21:49.917814    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:21:49.917830    4640 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:21:49.917840    4640 buildroot.go:174] setting up certificates
	I0805 16:21:49.917846    4640 provision.go:84] configureAuth start
	I0805 16:21:49.917856    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.917985    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:49.918095    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.918192    4640 provision.go:143] copyHostCerts
	I0805 16:21:49.918223    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:21:49.918280    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:21:49.918285    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:21:49.918411    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:21:49.918617    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:21:49.918652    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:21:49.918658    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:21:49.918733    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:21:49.918888    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:21:49.918922    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:21:49.918927    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:21:49.918994    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:21:49.919145    4640 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-985000-m02]
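
The server-cert step above issues a certificate signed by the minikube CA whose SANs cover 127.0.0.1, 192.169.0.14, localhost, minikube, and the machine name. A self-contained sketch of that signing with crypto/x509; the ECDSA key type and template fields are assumptions (minikube's actual key type and validity period may differ):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server certificate with the SAN list seen in the
// log above, signed by the given CA.
func signServerCert(ca *x509.Certificate, caKey *ecdsa.PrivateKey) ([]byte, *ecdsa.PrivateKey, error) {
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-985000-m02"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-985000-m02"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.14")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &priv.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, priv, nil
}

func main() {
	// Stand-in self-signed CA so the sketch runs on its own.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now().Add(-time.Hour),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	der, _, err := signServerCert(caCert, caKey)
	fmt.Println(len(der), err)
}
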
	I0805 16:21:50.072896    4640 provision.go:177] copyRemoteCerts
	I0805 16:21:50.072947    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:21:50.072962    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.073107    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.073199    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.073317    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.073426    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:50.108446    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:21:50.108519    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:21:50.128617    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:21:50.128684    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0805 16:21:50.148653    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:21:50.148720    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:21:50.168682    4640 provision.go:87] duration metric: took 250.828344ms to configureAuth
	I0805 16:21:50.168695    4640 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:21:50.168835    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:50.168849    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:50.168993    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.169087    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.169175    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.169262    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.169345    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.169486    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.169621    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.169628    4640 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:21:50.228062    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:21:50.228074    4640 buildroot.go:70] root file system type: tmpfs
	I0805 16:21:50.228150    4640 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:21:50.228164    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.228293    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.228388    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.228480    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.228586    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.228755    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.228888    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.228934    4640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:21:50.296901    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:21:50.296919    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.297064    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.297158    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.297250    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.297333    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.297475    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.297611    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.297624    4640 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:21:51.873922    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:21:51.873940    4640 main.go:141] libmachine: Checking connection to Docker...
	I0805 16:21:51.873964    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetURL
	I0805 16:21:51.874107    4640 main.go:141] libmachine: Docker is up and running!
	I0805 16:21:51.874115    4640 main.go:141] libmachine: Reticulating splines...
	I0805 16:21:51.874120    4640 client.go:171] duration metric: took 12.916447572s to LocalClient.Create
	I0805 16:21:51.874129    4640 start.go:167] duration metric: took 12.916485141s to libmachine.API.Create "multinode-985000"
	I0805 16:21:51.874135    4640 start.go:293] postStartSetup for "multinode-985000-m02" (driver="hyperkit")
	I0805 16:21:51.874142    4640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:21:51.874152    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:51.874292    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:21:51.874313    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:51.874416    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:51.874505    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.874583    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:51.874657    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:51.915394    4640 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:21:51.919538    4640 command_runner.go:130] > NAME=Buildroot
	I0805 16:21:51.919549    4640 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:21:51.919553    4640 command_runner.go:130] > ID=buildroot
	I0805 16:21:51.919557    4640 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:21:51.919560    4640 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:21:51.919635    4640 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:21:51.919645    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:21:51.919746    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:21:51.919897    4640 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:21:51.919903    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:21:51.920070    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:21:51.929531    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:21:51.959146    4640 start.go:296] duration metric: took 85.003807ms for postStartSetup
	I0805 16:21:51.959174    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:51.959830    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:51.959996    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:51.960355    4640 start.go:128] duration metric: took 13.03589336s to createHost
	I0805 16:21:51.960370    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:51.960461    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:51.960532    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.960607    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.960679    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:51.960792    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:51.960921    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:51.960928    4640 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 16:21:52.018527    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722900112.019707412
	
	I0805 16:21:52.018539    4640 fix.go:216] guest clock: 1722900112.019707412
	I0805 16:21:52.018544    4640 fix.go:229] Guest: 2024-08-05 16:21:52.019707412 -0700 PDT Remote: 2024-08-05 16:21:51.960363 -0700 PDT m=+79.692294773 (delta=59.344412ms)
	I0805 16:21:52.018555    4640 fix.go:200] guest clock delta is within tolerance: 59.344412ms
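
The guest-clock check above runs `date +%s.%N` in the VM, parses the result, and accepts the start when the host/guest delta is within tolerance (59.344412ms here). A small sketch of that comparison, fed the exact values captured in the log; the tolerance passed at the end is an assumption:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's "date +%s.%N" output and returns the
// absolute skew against the given host timestamp.
func clockDelta(guestOut string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	d := guest.Sub(hostNow)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	// Values from the log: guest 1722900112.019707412,
	// host 2024-08-05 16:21:51.960363 -0700 PDT.
	host := time.Date(2024, 8, 5, 16, 21, 51, 960363000, time.FixedZone("PDT", -7*3600))
	d, err := clockDelta("1722900112.019707412", host)
	fmt.Println(d, err, d <= 2*time.Second) // tolerance value is an assumption
}
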
	I0805 16:21:52.018561    4640 start.go:83] releasing machines lock for "multinode-985000-m02", held for 13.094193048s
	I0805 16:21:52.018577    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.018703    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:52.040117    4640 out.go:177] * Found network options:
	I0805 16:21:52.084887    4640 out.go:177]   - NO_PROXY=192.169.0.13
	W0805 16:21:52.106885    4640 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:21:52.106945    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.107811    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.108153    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.108320    4640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:21:52.108371    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	W0805 16:21:52.108412    4640 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:21:52.108519    4640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:21:52.108545    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:52.108628    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:52.108772    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:52.108842    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:52.108951    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:52.109026    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:52.109176    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:52.109197    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:52.109323    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:52.141829    4640 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:21:52.141939    4640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:21:52.141993    4640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:21:52.191903    4640 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:21:52.192466    4640 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:21:52.192507    4640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:21:52.192514    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:21:52.192581    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:21:52.208225    4640 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:21:52.208528    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:21:52.217078    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:21:52.225489    4640 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:21:52.225534    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:21:52.233992    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:21:52.242465    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:21:52.250835    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:21:52.260065    4640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:21:52.268863    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:21:52.277242    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:21:52.285501    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
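
The run of sed invocations above rewrites /etc/containerd/config.toml in place (SystemdCgroup = false for the cgroupfs driver, runc v2, conf_dir, unprivileged ports, and so on). One of those edits, mirrored in Go with a multiline regexp, to show what each sed is doing:

package main

import (
	"fmt"
	"regexp"
)

// setCgroupfs mirrors `sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'`
// on the containerd config, preserving the original indentation via the
// captured group.
func setCgroupfs(config string) string {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAllString(config, "${1}SystemdCgroup = false")
}

func main() {
	in := "    [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n      SystemdCgroup = true\n"
	fmt.Print(setCgroupfs(in))
}
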
	I0805 16:21:52.293845    4640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:21:52.301185    4640 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:21:52.301319    4640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:21:52.308881    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:52.403323    4640 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:21:52.423722    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:21:52.423794    4640 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:21:52.442557    4640 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:21:52.443108    4640 command_runner.go:130] > [Unit]
	I0805 16:21:52.443119    4640 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:21:52.443124    4640 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:21:52.443128    4640 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:21:52.443132    4640 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:21:52.443136    4640 command_runner.go:130] > StartLimitBurst=3
	I0805 16:21:52.443141    4640 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:21:52.443147    4640 command_runner.go:130] > [Service]
	I0805 16:21:52.443151    4640 command_runner.go:130] > Type=notify
	I0805 16:21:52.443155    4640 command_runner.go:130] > Restart=on-failure
	I0805 16:21:52.443160    4640 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0805 16:21:52.443165    4640 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:21:52.443175    4640 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:21:52.443182    4640 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:21:52.443188    4640 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:21:52.443194    4640 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:21:52.443200    4640 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:21:52.443212    4640 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:21:52.443224    4640 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:21:52.443231    4640 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:21:52.443234    4640 command_runner.go:130] > ExecStart=
	I0805 16:21:52.443246    4640 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:21:52.443250    4640 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:21:52.443256    4640 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:21:52.443262    4640 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:21:52.443265    4640 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:21:52.443269    4640 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:21:52.443272    4640 command_runner.go:130] > LimitCORE=infinity
	I0805 16:21:52.443277    4640 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:21:52.443282    4640 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:21:52.443285    4640 command_runner.go:130] > TasksMax=infinity
	I0805 16:21:52.443290    4640 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:21:52.443296    4640 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:21:52.443299    4640 command_runner.go:130] > Delegate=yes
	I0805 16:21:52.443304    4640 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:21:52.443313    4640 command_runner.go:130] > KillMode=process
	I0805 16:21:52.443317    4640 command_runner.go:130] > [Install]
	I0805 16:21:52.443321    4640 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:21:52.443454    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:21:52.455112    4640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:21:52.472976    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:21:52.485648    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:21:52.496640    4640 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:21:52.520742    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:21:52.532843    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:21:52.547391    4640 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:21:52.547619    4640 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:21:52.550475    4640 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:21:52.550551    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:21:52.558821    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:21:52.572801    4640 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:21:52.669948    4640 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:21:52.772017    4640 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:21:52.772038    4640 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
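
The 130-byte /etc/docker/daemon.json copied above is what selects the cgroupfs cgroup driver for dockerd. The log does not show the file's contents, so the keys below are an assumption rather than the captured file; "exec-opts" with native.cgroupdriver is the standard dockerd setting for this:

package main

import (
	"encoding/json"
	"fmt"
)

// daemonJSON builds a minimal daemon.json selecting the cgroupfs cgroup
// driver. The exact keys minikube writes are not shown in this log.
func daemonJSON() ([]byte, error) {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	return json.MarshalIndent(cfg, "", "  ")
}

func main() {
	b, _ := daemonJSON()
	fmt.Println(string(b))
}
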
	I0805 16:21:52.785587    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:52.887001    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:22:53.782764    4640 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0805 16:22:53.782779    4640 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0805 16:22:53.782788    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.895755367s)
	I0805 16:22:53.782849    4640 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0805 16:22:53.791796    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:22:53.791808    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578059613Z" level=info msg="Starting up"
	I0805 16:22:53.791820    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578746899Z" level=info msg="containerd not running, starting managed containerd"
	I0805 16:22:53.791833    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.579364099Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	I0805 16:22:53.791843    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.597194743Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0805 16:22:53.791853    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613422882Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0805 16:22:53.791865    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613448264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0805 16:22:53.791875    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613527396Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0805 16:22:53.791884    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613540484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791897    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613598776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791906    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613664323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791924    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613844698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791936    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613881896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791948    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613894727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791957    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613902000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791967    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614005875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791976    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614259691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791991    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615867073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.792000    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615974584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.792024    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616138996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.792033    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616172823Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0805 16:22:53.792042    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616291383Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0805 16:22:53.792050    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616398312Z" level=info msg="metadata content store policy set" policy=shared
	I0805 16:22:53.792059    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.618998610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0805 16:22:53.792068    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619065338Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0805 16:22:53.792076    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619081703Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0805 16:22:53.792085    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619092273Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0805 16:22:53.792094    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619101426Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0805 16:22:53.792103    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619164798Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0805 16:22:53.792113    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619370752Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0805 16:22:53.792121    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619460644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0805 16:22:53.792129    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619495461Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0805 16:22:53.792138    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619506581Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0805 16:22:53.792148    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619515758Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792158    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619524383Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792170    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619532546Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792178    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619541391Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792187    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619550990Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792197    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619565508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792266    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619576616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792278    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619584035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792291    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619598072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792299    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619608190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792307    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619616319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792316    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619625389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792326    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619634123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792335    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619648148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792344    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619658942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792353    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619667668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792362    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619676302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792371    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619686416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792380    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619694011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792388    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619701566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792397    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619709342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792406    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619719250Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0805 16:22:53.792415    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619733203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792423    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619741785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792432    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619749153Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0805 16:22:53.792442    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619797467Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0805 16:22:53.792454    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619811479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0805 16:22:53.792467    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619819137Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0805 16:22:53.792661    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619826861Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0805 16:22:53.792673    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619833500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792682    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619841896Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0805 16:22:53.792690    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619852419Z" level=info msg="NRI interface is disabled by configuration."
	I0805 16:22:53.792702    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620071162Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0805 16:22:53.792710    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620124755Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0805 16:22:53.792718    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620155079Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0805 16:22:53.792725    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620168148Z" level=info msg="containerd successfully booted in 0.023750s"
	I0805 16:22:53.792734    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.639692405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0805 16:22:53.792741    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.644102102Z" level=info msg="Loading containers: start."
	I0805 16:22:53.792763    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.740540264Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0805 16:22:53.792774    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.826229634Z" level=info msg="Loading containers: done."
	I0805 16:22:53.792783    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843276878Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0805 16:22:53.792792    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843375843Z" level=info msg="Daemon has completed initialization"
	I0805 16:22:53.792800    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869275976Z" level=info msg="API listen on /var/run/docker.sock"
	I0805 16:22:53.792807    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869434474Z" level=info msg="API listen on [::]:2376"
	I0805 16:22:53.792813    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	I0805 16:22:53.792821    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.919662359Z" level=info msg="Processing signal 'terminated'"
	I0805 16:22:53.792829    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920773928Z" level=info msg="Daemon shutdown complete"
	I0805 16:22:53.792840    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920792538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0805 16:22:53.792852    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920845272Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0805 16:22:53.792861    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920858866Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0805 16:22:53.792868    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0805 16:22:53.792874    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0805 16:22:53.792904    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0805 16:22:53.792911    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:22:53.792918    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 dockerd[923]: time="2024-08-05T23:21:53.957339969Z" level=info msg="Starting up"
	I0805 16:22:53.792929    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 dockerd[923]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0805 16:22:53.792940    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0805 16:22:53.792946    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0805 16:22:53.792952    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0805 16:22:53.817223    4640 out.go:177] 
	W0805 16:22:53.838182    4640 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 05 23:21:50 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578059613Z" level=info msg="Starting up"
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578746899Z" level=info msg="containerd not running, starting managed containerd"
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.579364099Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.597194743Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613422882Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613448264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613527396Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613540484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613598776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613664323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613844698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613881896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613894727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613902000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614005875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614259691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615867073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615974584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616138996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616172823Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616291383Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616398312Z" level=info msg="metadata content store policy set" policy=shared
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.618998610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619065338Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619081703Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619092273Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619101426Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619164798Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619370752Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619460644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619495461Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619506581Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619515758Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619524383Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619532546Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619541391Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619550990Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619565508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619576616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619584035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619598072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619608190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619616319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619625389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619634123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619648148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619658942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619667668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619676302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619686416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619694011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619701566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619709342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619719250Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619733203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619741785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619749153Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619797467Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619811479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619819137Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619826861Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619833500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619841896Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619852419Z" level=info msg="NRI interface is disabled by configuration."
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620071162Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620124755Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620155079Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620168148Z" level=info msg="containerd successfully booted in 0.023750s"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.639692405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.644102102Z" level=info msg="Loading containers: start."
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.740540264Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.826229634Z" level=info msg="Loading containers: done."
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843276878Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843375843Z" level=info msg="Daemon has completed initialization"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869275976Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869434474Z" level=info msg="API listen on [::]:2376"
	Aug 05 23:21:51 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.919662359Z" level=info msg="Processing signal 'terminated'"
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920773928Z" level=info msg="Daemon shutdown complete"
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920792538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920845272Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920858866Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 05 23:21:52 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:21:53 multinode-985000-m02 dockerd[923]: time="2024-08-05T23:21:53.957339969Z" level=info msg="Starting up"
	Aug 05 23:22:53 multinode-985000-m02 dockerd[923]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0805 16:22:53.838301    4640 out.go:239] * 
	W0805 16:22:53.839537    4640 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:22:53.901092    4640 out.go:177] 
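
	Note: the failure above is dockerd timing out after 60s dialing /run/containerd/containerd.sock on the m02 node. Typical next steps to localize it on the guest (a sketch, assuming the VM is still reachable; -n selects the node within the profile):
	
	  $ minikube ssh -p multinode-985000 -n m02
	  $ sudo systemctl status containerd --no-pager
	  $ sudo journalctl -u containerd --no-pager | tail -n 50
	  $ ls -l /run/containerd/containerd.sock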
	
	
	==> Docker <==
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.538240622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.545949341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546006859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546094356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546213245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:21:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2a8cd74365e92f179bb6ee1ce28c9364c192d2bf64c54e8b18c5339cfbdf5dcd/resolv.conf as [nameserver 192.169.0.1]"
	Aug 05 23:21:36 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:21:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/35b9ac42edc06af57c697463456d60a00f8d9d12849ef967af1e639bc238e3b3/resolv.conf as [nameserver 192.169.0.1]"
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.715025205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.715620680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.716022138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.717088853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755323726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755409641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755418837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.764703174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.493861515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.493963422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.494329548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.494770138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:57 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:22:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/abfb33d4f204dd0b2a7ffc533336cce5539144674b64125ac7373b0be8961559/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 05 23:22:58 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:22:58Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841390849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841491056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841532145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841640743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0cbc162071e51       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   abfb33d4f204d       busybox-fc5497c4f-44k5g
	c9365aec33892       cbb01a7bd410d                                                                                         12 minutes ago      Running             coredns                   0                   35b9ac42edc06       coredns-7db6d8ff4d-fqtll
	3d9fd612d0b14       6e38f40d628db                                                                                         12 minutes ago      Running             storage-provisioner       0                   2a8cd74365e92       storage-provisioner
	724e5cfab0a27       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              13 minutes ago      Running             kindnet-cni               0                   65a1122097f07       kindnet-tvtvg
	d58ca48f9f8b2       55bb025d2cfa5                                                                                         13 minutes ago      Running             kube-proxy                0                   c91338eb0e138       kube-proxy-fwgw7
	792feba1a6f6b       3edc18e7b7672                                                                                         13 minutes ago      Running             kube-scheduler            0                   c86e04eb7823b       kube-scheduler-multinode-985000
	1fdd85b796ab3       3861cfcd7c04c                                                                                         13 minutes ago      Running             etcd                      0                   b58900db52990       etcd-multinode-985000
	d11865076c645       76932a3b37d7e                                                                                         13 minutes ago      Running             kube-controller-manager   0                   55a20063845e3       kube-controller-manager-multinode-985000
	608878b33f358       1f6d574d502f3                                                                                         13 minutes ago      Running             kube-apiserver            0                   569788c2699f1       kube-apiserver-multinode-985000
	
	
	==> coredns [c9365aec3389] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57821 - 19682 "HINFO IN 7732396596932693360.4385804994640298901. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014623104s
	[INFO] 10.244.0.3:44234 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136193s
	[INFO] 10.244.0.3:37423 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.058799401s
	[INFO] 10.244.0.3:57961 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.010090318s
	[INFO] 10.244.0.3:37799 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.012765436s
	[INFO] 10.244.0.3:46499 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078364s
	[INFO] 10.244.0.3:42436 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011216992s
	[INFO] 10.244.0.3:35880 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000144767s
	[INFO] 10.244.0.3:39224 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104006s
	[INFO] 10.244.0.3:48536 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013324615s
	[INFO] 10.244.0.3:55841 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000221823s
	[INFO] 10.244.0.3:46712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111417s
	[INFO] 10.244.0.3:51982 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099744s
	[INFO] 10.244.0.3:55425 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080184s
	[INFO] 10.244.0.3:58084 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119904s
	[INFO] 10.244.0.3:57892 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049065s
	[INFO] 10.244.0.3:52329 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049128s
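
	Note: the NXDOMAIN answers for "kubernetes.default" and "kubernetes.default.default.svc.cluster.local" followed by NOERROR for "kubernetes.default.svc.cluster.local" are the normal search-path walk for an unqualified name. It can be reproduced from the busybox pod listed earlier (assuming the pod is still running):
	
	  $ kubectl exec busybox-fc5497c4f-44k5g -- nslookup kubernetes.default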
	
	
	==> describe nodes <==
	Name:               multinode-985000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-985000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=multinode-985000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T16_21_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:21:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-985000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:34:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-985000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 43d0d80c8ac846e58ac4351481e2a76f
	  System UUID:                3ac6443b-0000-0000-898d-9b152fa64288
	  Boot ID:                    382df761-aca3-4a9d-bdce-655bf0444398
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-44k5g                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-fqtll                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-985000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-tvtvg                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-985000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-multinode-985000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-fwgw7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-985000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node multinode-985000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node multinode-985000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node multinode-985000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node multinode-985000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node multinode-985000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node multinode-985000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node multinode-985000 event: Registered Node multinode-985000 in Controller
	  Normal  NodeReady                12m                kubelet          Node multinode-985000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.261909] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.788416] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
	[  +0.099076] systemd-fstab-generator[502]: Ignoring "noauto" option for root device
	[  +1.730104] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +0.293514] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.050985] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.056812] systemd-fstab-generator[892]: Ignoring "noauto" option for root device
	[  +0.126132] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[  +2.458612] systemd-fstab-generator[1120]: Ignoring "noauto" option for root device
	[  +0.104830] systemd-fstab-generator[1132]: Ignoring "noauto" option for root device
	[  +0.110549] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +0.128910] systemd-fstab-generator[1159]: Ignoring "noauto" option for root device
	[  +3.841948] systemd-fstab-generator[1259]: Ignoring "noauto" option for root device
	[  +0.049995] kauditd_printk_skb: 180 callbacks suppressed
	[  +2.575866] systemd-fstab-generator[1508]: Ignoring "noauto" option for root device
	[  +3.513702] systemd-fstab-generator[1689]: Ignoring "noauto" option for root device
	[  +0.052965] kauditd_printk_skb: 70 callbacks suppressed
	[Aug 5 23:21] systemd-fstab-generator[2095]: Ignoring "noauto" option for root device
	[  +0.093506] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.997559] systemd-fstab-generator[2287]: Ignoring "noauto" option for root device
	[  +0.103967] kauditd_printk_skb: 12 callbacks suppressed
	[ +16.210215] kauditd_printk_skb: 60 callbacks suppressed
	[Aug 5 23:22] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [1fdd85b796ab] <==
	{"level":"info","ts":"2024-08-05T23:21:02.190598Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:21:02.190621Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:21:02.179152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 switched to configuration voters=(16152458731666035825)"}
	{"level":"info","ts":"2024-08-05T23:21:02.190761Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2024-08-05T23:21:02.845352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.84543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.845462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.845512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.849595Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.851787Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-985000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T23:21:02.852037Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:21:02.855611Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-08-05T23:21:02.856003Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.856059Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.85615Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.863221Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:21:02.86336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T23:21:02.863406Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T23:21:02.864495Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T23:31:02.914901Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":684}
	{"level":"info","ts":"2024-08-05T23:31:02.918154Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":684,"took":"2.558785ms","hash":2682644219,"current-db-size-bytes":2088960,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2088960,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-08-05T23:31:02.918199Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2682644219,"revision":684,"compact-revision":-1}
	
	
	==> kernel <==
	 23:34:24 up 13 min,  0 users,  load average: 0.22, 0.12, 0.09
	Linux multinode-985000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [724e5cfab0a2] <==
	I0805 23:32:14.997932       1 main.go:299] handling current node
	I0805 23:32:24.989692       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:32:24.989734       1 main.go:299] handling current node
	I0805 23:32:34.989491       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:32:34.989526       1 main.go:299] handling current node
	I0805 23:32:44.994445       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:32:44.994498       1 main.go:299] handling current node
	I0805 23:32:54.996022       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:32:54.996137       1 main.go:299] handling current node
	I0805 23:33:04.994884       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:33:04.994996       1 main.go:299] handling current node
	I0805 23:33:14.989401       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:33:14.989421       1 main.go:299] handling current node
	I0805 23:33:24.989307       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:33:24.989722       1 main.go:299] handling current node
	I0805 23:33:34.988932       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:33:34.989066       1 main.go:299] handling current node
	I0805 23:33:44.994912       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:33:44.995362       1 main.go:299] handling current node
	I0805 23:33:54.988562       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:33:54.988724       1 main.go:299] handling current node
	I0805 23:34:04.990678       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:04.991047       1 main.go:299] handling current node
	I0805 23:34:14.989462       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:14.989592       1 main.go:299] handling current node
	
	
	==> kube-apiserver [608878b33f35] <==
	I0805 23:21:04.064440       1 shared_informer.go:320] Caches are synced for configmaps
	I0805 23:21:04.096991       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0805 23:21:04.097032       1 aggregator.go:165] initial CRD sync complete...
	I0805 23:21:04.097038       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 23:21:04.097041       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 23:21:04.097046       1 cache.go:39] Caches are synced for autoregister controller
	I0805 23:21:04.110976       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 23:21:04.964782       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0805 23:21:04.969492       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0805 23:21:04.969592       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0805 23:21:05.293407       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 23:21:05.318630       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 23:21:05.372930       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0805 23:21:05.377089       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I0805 23:21:05.377814       1 controller.go:615] quota admission added evaluator for: endpoints
	I0805 23:21:05.381896       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 23:21:06.014220       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0805 23:21:06.529594       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 23:21:06.534785       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0805 23:21:06.541889       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 23:21:20.069451       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0805 23:21:20.168118       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0805 23:34:22.712021       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52583: use of closed network connection
	E0805 23:34:23.040370       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52588: use of closed network connection
	E0805 23:34:23.352264       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52593: use of closed network connection
	
	
	==> kube-controller-manager [d11865076c64] <==
	I0805 23:21:19.437276       1 shared_informer.go:320] Caches are synced for HPA
	I0805 23:21:19.471485       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 23:21:19.493007       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 23:21:19.891021       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 23:21:19.917468       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 23:21:19.917792       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0805 23:21:20.414332       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="341.696199ms"
	I0805 23:21:20.435171       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.789887ms"
	I0805 23:21:20.453666       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.448745ms"
	I0805 23:21:20.454853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="1.144243ms"
	I0805 23:21:20.787054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.481389ms"
	I0805 23:21:20.817469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.368774ms"
	I0805 23:21:20.817550       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.975µs"
	I0805 23:21:35.878200       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.077µs"
	I0805 23:21:35.888778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.967µs"
	I0805 23:21:37.680305       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.353µs"
	I0805 23:21:37.699191       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="7.51419ms"
	I0805 23:21:37.699276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.856µs"
	I0805 23:21:39.419986       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0805 23:22:57.139604       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.652844ms"
	I0805 23:22:57.152479       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.645403ms"
	I0805 23:22:57.161837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.312944ms"
	I0805 23:22:57.161913       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.986µs"
	I0805 23:22:59.131878       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.268042ms"
	I0805 23:22:59.132399       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.529µs"
	
	
	==> kube-proxy [d58ca48f9f8b] <==
	I0805 23:21:21.029929       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:21:21.072929       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0805 23:21:21.105532       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:21:21.105552       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:21:21.105563       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:21:21.107493       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:21:21.107594       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:21:21.107602       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:21:21.108477       1 config.go:192] "Starting service config controller"
	I0805 23:21:21.108482       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:21:21.108492       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:21:21.108494       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:21:21.108784       1 config.go:319] "Starting node config controller"
	I0805 23:21:21.108789       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:21:21.209420       1 shared_informer.go:320] Caches are synced for node config
	I0805 23:21:21.209474       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:21:21.209501       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [792feba1a6f6] <==
	E0805 23:21:04.024310       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:04.024229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 23:21:04.024017       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:21:04.024329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:21:04.024047       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:21:04.024362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:21:04.024118       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:21:04.024431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0805 23:21:04.860871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:04.861069       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:04.959895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 23:21:04.959949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 23:21:04.962444       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:21:04.962496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:21:04.968410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:21:04.968452       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:21:05.030527       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 23:21:05.030566       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 23:21:05.076451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:05.076659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:05.118159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:05.118676       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:05.141945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:21:05.142020       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0805 23:21:08.218627       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 23:30:06 multinode-985000 kubelet[2102]: E0805 23:30:06.388840    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:30:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:30:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:30:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:30:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:31:06 multinode-985000 kubelet[2102]: E0805 23:31:06.388949    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:31:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:31:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:31:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:31:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:32:06 multinode-985000 kubelet[2102]: E0805 23:32:06.388091    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:33:06 multinode-985000 kubelet[2102]: E0805 23:33:06.388876    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:34:06 multinode-985000 kubelet[2102]: E0805 23:34:06.388016    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [3d9fd612d0b1] <==
	I0805 23:21:36.824264       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 23:21:36.839328       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 23:21:36.841986       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 23:21:36.851899       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 23:21:36.852326       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-985000_20a8683f-3aa0-4f0f-a016-73ecb7148b29!
	I0805 23:21:36.851925       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cf31f72-12b6-4b0c-b90e-6ea19cb3d50f", APIVersion:"v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-985000_20a8683f-3aa0-4f0f-a016-73ecb7148b29 became leader
	I0805 23:21:36.952695       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-985000_20a8683f-3aa0-4f0f-a016-73ecb7148b29!
	

                                                
                                                
-- /stdout --
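Note: the recurring kubelet "Could not set up iptables canary" errors in the log above indicate that the guest kernel has no ip6tables `nat' table; they appear throughout the run and are unrelated to the Pending busybox pod. A minimal sketch, assuming shell access to the node (e.g. via `minikube ssh -p multinode-985000`), of how that could be checked by hand:

	lsmod | grep ip6table_nat    # no output => the IPv6 NAT module is not loaded
	sudo modprobe ip6table_nat   # fails if the kernel was built without the module
	sudo ip6tables -t nat -L -n  # succeeds once the nat table exists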
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-985000 -n multinode-985000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-985000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-ptd5b
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/DeployApp2Nodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-985000 describe pod busybox-fc5497c4f-ptd5b
helpers_test.go:282: (dbg) kubectl --context multinode-985000 describe pod busybox-fc5497c4f-ptd5b:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-ptd5b
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x2xz9 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-x2xz9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  77s (x3 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
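Note: the FailedScheduling warning above is the proximate cause of the Pending pod: the busybox ReplicaSet's pods carry an inter-pod anti-affinity rule, and with only multinode-985000 Ready there is no second node for the second replica. A minimal sketch (context and pod names taken from this run) of confirming that by hand:

	kubectl --context multinode-985000 get pod busybox-fc5497c4f-ptd5b \
	  -o jsonpath='{.spec.affinity.podAntiAffinity}'   # prints the anti-affinity rule
	kubectl --context multinode-985000 get nodes       # lists a single Ready node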
helpers_test.go:285: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (689.19s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- exec busybox-fc5497c4f-44k5g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- exec busybox-fc5497c4f-44k5g -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-985000 -- exec busybox-fc5497c4f-ptd5b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:572: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-985000 -- exec busybox-fc5497c4f-ptd5b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (119.0907ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-ptd5b does not have a host assigned

                                                
                                                
** /stderr **
multinode_test.go:574: Pod busybox-fc5497c4f-ptd5b could not resolve 'host.minikube.internal': exit status 1
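Note: the `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline used by the test takes the fifth line of busybox's nslookup output and extracts its third space-separated field, which in this busybox image's output format is where the resolved address of host.minikube.internal appears; it can only be exec'd in a pod that has been scheduled, which busybox-fc5497c4f-ptd5b never was. A minimal sketch against the replica that did schedule (names taken from this run):

	kubectl --context multinode-985000 exec busybox-fc5497c4f-44k5g -- \
	  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"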
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-985000 logs -n 25: (1.990749894s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p mount-start-1-684000                           | mount-start-1-684000 | jenkins | v1.33.1 | 05 Aug 24 16:20 PDT | 05 Aug 24 16:20 PDT |
	| start   | -p multinode-985000                               | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:20 PDT |                     |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=hyperkit                                 |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- apply -f                   | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:22 PDT | 05 Aug 24 16:22 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- rollout                    | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:22 PDT |                     |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:32 PDT | 05 Aug 24 16:32 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000     | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 16:20:32
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 16:20:32.303800    4640 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:20:32.303980    4640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:20:32.303986    4640 out.go:304] Setting ErrFile to fd 2...
	I0805 16:20:32.303990    4640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:20:32.304163    4640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:20:32.305609    4640 out.go:298] Setting JSON to false
	I0805 16:20:32.329307    4640 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3003,"bootTime":1722897029,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:20:32.329400    4640 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:20:32.351877    4640 out.go:177] * [multinode-985000] minikube v1.33.1 on Darwin 14.5
	I0805 16:20:32.392940    4640 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:20:32.393020    4640 notify.go:220] Checking for updates...
	I0805 16:20:32.435775    4640 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:20:32.456783    4640 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:20:32.477872    4640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:20:32.499010    4640 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:20:32.519936    4640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:20:32.541363    4640 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:20:32.571784    4640 out.go:177] * Using the hyperkit driver based on user configuration
	I0805 16:20:32.613992    4640 start.go:297] selected driver: hyperkit
	I0805 16:20:32.614020    4640 start.go:901] validating driver "hyperkit" against <nil>
	I0805 16:20:32.614042    4640 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:20:32.618322    4640 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:20:32.618456    4640 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 16:20:32.627075    4640 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 16:20:32.631391    4640 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:20:32.631417    4640 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 16:20:32.631452    4640 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:20:32.631678    4640 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:20:32.631709    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:20:32.631719    4640 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0805 16:20:32.631730    4640 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 16:20:32.631823    4640 start.go:340] cluster config:
	{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:20:32.631925    4640 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:20:32.673756    4640 out.go:177] * Starting "multinode-985000" primary control-plane node in "multinode-985000" cluster
	I0805 16:20:32.695001    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:20:32.695088    4640 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 16:20:32.695107    4640 cache.go:56] Caching tarball of preloaded images
	I0805 16:20:32.695319    4640 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:20:32.695338    4640 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:20:32.695809    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:20:32.695848    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json: {Name:mk470c2e849a0c86ee251e86e74d9f6dfdb47dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:32.696485    4640 start.go:360] acquireMachinesLock for multinode-985000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:20:32.696593    4640 start.go:364] duration metric: took 88.666µs to acquireMachinesLock for "multinode-985000"
	I0805 16:20:32.696646    4640 start.go:93] Provisioning new machine with config: &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:20:32.696745    4640 start.go:125] createHost starting for "" (driver="hyperkit")
	I0805 16:20:32.718059    4640 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:20:32.718351    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:20:32.718416    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:20:32.728195    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52477
	I0805 16:20:32.728547    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:20:32.728938    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:20:32.728948    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:20:32.729147    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:20:32.729251    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:32.729369    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:32.729498    4640 start.go:159] libmachine.API.Create for "multinode-985000" (driver="hyperkit")
	I0805 16:20:32.729521    4640 client.go:168] LocalClient.Create starting
	I0805 16:20:32.729556    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:20:32.729608    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:20:32.729625    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:20:32.729685    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:20:32.729724    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:20:32.729737    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:20:32.729749    4640 main.go:141] libmachine: Running pre-create checks...
	I0805 16:20:32.729760    4640 main.go:141] libmachine: (multinode-985000) Calling .PreCreateCheck
	I0805 16:20:32.729840    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:32.729974    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:32.739224    4640 main.go:141] libmachine: Creating machine...
	I0805 16:20:32.739247    4640 main.go:141] libmachine: (multinode-985000) Calling .Create
	I0805 16:20:32.739475    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:32.739754    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.739457    4648 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:20:32.739852    4640 main.go:141] libmachine: (multinode-985000) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:20:32.920622    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.920524    4648 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa...
	I0805 16:20:32.957084    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.957005    4648 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk...
	I0805 16:20:32.957123    4640 main.go:141] libmachine: (multinode-985000) DBG | Writing magic tar header
	I0805 16:20:32.957134    4640 main.go:141] libmachine: (multinode-985000) DBG | Writing SSH key tar header
	I0805 16:20:32.957531    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.957490    4648 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000 ...
	I0805 16:20:33.331110    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:33.331140    4640 main.go:141] libmachine: (multinode-985000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid
	I0805 16:20:33.331159    4640 main.go:141] libmachine: (multinode-985000) DBG | Using UUID 3ac698fc-f622-443b-898d-9b152fa64288
	I0805 16:20:33.442582    4640 main.go:141] libmachine: (multinode-985000) DBG | Generated MAC e2:6:14:d2:13:ae
	I0805 16:20:33.442603    4640 main.go:141] libmachine: (multinode-985000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:20:33.442636    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:20:33.442669    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:20:33.442719    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3ac698fc-f622-443b-898d-9b152fa64288", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:20:33.442758    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3ac698fc-f622-443b-898d-9b152fa64288 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
	I0805 16:20:33.442774    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:20:33.445733    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Pid is 4651
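
The Arguments and CmdLine entries above show how the driver launches hyperkit: a plain subprocess started with that argv, whose pid (4651 here) is then tracked through the pid file passed via -F. A minimal Go sketch of that start-and-track pattern follows; the helper name and pid-file write are illustrative only (hyperkit itself maintains the -F pid file), not the driver's actual code:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// launchVM starts a hypervisor-style subprocess and records its pid so a
// later "clean start" check (like the hyperkit.pid probe above) can tell
// whether an instance is already running. Hypothetical helper for
// illustration only.
func launchVM(binary, pidFile string, args ...string) (int, error) {
	cmd := exec.Command(binary, args...)
	cmd.Stdout = os.Stdout // the real driver redirects these to its logger
	cmd.Stderr = os.Stderr
	if err := cmd.Start(); err != nil {
		return 0, err
	}
	pid := cmd.Process.Pid
	// hyperkit writes its own pid file via -F; this write is a stand-in.
	if err := os.WriteFile(pidFile, []byte(fmt.Sprintf("%d\n", pid)), 0o644); err != nil {
		return pid, err
	}
	return pid, nil
}

func main() {
	pid, err := launchVM("/usr/local/bin/hyperkit", "/tmp/hyperkit.pid", "-A", "-u")
	if err != nil {
		fmt.Println("launch failed:", err)
		return
	}
	fmt.Println("started pid", pid)
}
```
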
	I0805 16:20:33.446145    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 0
	I0805 16:20:33.446167    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:33.446227    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:33.447073    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:33.447135    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:33.447152    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:33.447186    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:33.447202    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:33.447214    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:33.447222    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:33.447229    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:33.447247    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:33.447269    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:33.447287    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:33.447304    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:33.447321    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:33.453446    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:20:33.506623    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:20:33.507268    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:20:33.507283    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:20:33.507290    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:20:33.507298    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:20:33.891346    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:20:33.891387    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:20:34.006163    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:20:34.006177    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:20:34.006189    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:20:34.006208    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:20:34.007050    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:20:34.007082    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:20:35.448624    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 1
	I0805 16:20:35.448640    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:35.448724    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:35.449516    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:35.449591    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:35.449607    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:35.449619    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:35.449625    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:35.449648    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:35.449664    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:35.449695    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:35.449711    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:35.449719    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:35.449725    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:35.449731    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:35.449738    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:37.449834    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 2
	I0805 16:20:37.449851    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:37.449867    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:37.450676    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:37.450690    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:37.450697    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:37.450707    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:37.450722    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:37.450733    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:37.450744    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:37.450754    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:37.450771    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:37.450784    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:37.450797    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:37.450809    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:37.450819    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:39.451161    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 3
	I0805 16:20:39.451179    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:39.451277    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:39.452025    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:39.452066    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:39.452089    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:39.452104    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:39.452124    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:39.452141    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:39.452154    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:39.452161    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:39.452167    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:39.452183    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:39.452195    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:39.452202    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:39.452211    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:39.592041    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:20:39.592070    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:20:39.592076    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:20:39.615760    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:20:41.452210    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 4
	I0805 16:20:41.452225    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:41.452325    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:41.453101    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:41.453153    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:41.453162    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:41.453169    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:41.453178    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:41.453187    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:41.453194    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:41.453200    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:41.453219    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:41.453231    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:41.453241    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:41.453250    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:41.453258    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:43.455148    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 5
	I0805 16:20:43.455166    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:43.455244    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:43.456059    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:43.456103    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:20:43.456115    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:20:43.456122    4640 main.go:141] libmachine: (multinode-985000) DBG | Found match: e2:6:14:d2:13:ae
	I0805 16:20:43.456127    4640 main.go:141] libmachine: (multinode-985000) DBG | IP: 192.169.0.13
	I0805 16:20:43.456181    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:43.456781    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:43.456879    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:43.456972    4640 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 16:20:43.456985    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:20:43.457082    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:43.457144    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:43.457907    4640 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 16:20:43.457917    4640 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 16:20:43.457923    4640 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 16:20:43.457927    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:43.458023    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:43.458126    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:43.458255    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:43.458346    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:43.458472    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:43.458676    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:43.458683    4640 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 16:20:44.513424    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
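
"Waiting for SSH" above amounts to repeatedly running `exit 0` over SSH until the guest's sshd answers; the nil cmd err marks the first success. A sketch of that readiness probe using golang.org/x/crypto/ssh, as a stand-in for libmachine's own SSH client (address, user, and key path below are taken from this log and would differ per machine):

```go
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials addr with the given private key until `exit 0`
// succeeds or the deadline passes, mirroring the probe in the log.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a freshly created local VM
		Timeout:         5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
			sess, err := client.NewSession()
			if err == nil {
				err = sess.Run("exit 0")
				sess.Close()
			}
			client.Close()
			if err == nil {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("ssh not ready on %s after %v", addr, timeout)
}

func main() {
	err := waitForSSH("192.169.0.13:22", "docker",
		"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa",
		2*time.Minute)
	fmt.Println("ssh ready:", err == nil, err)
}
```
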
	I0805 16:20:44.513443    4640 main.go:141] libmachine: Detecting the provisioner...
	I0805 16:20:44.513452    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.513594    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.513694    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.513791    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.513876    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.513996    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.514158    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.514165    4640 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 16:20:44.573082    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 16:20:44.573142    4640 main.go:141] libmachine: found compatible host: buildroot
	I0805 16:20:44.573149    4640 main.go:141] libmachine: Provisioning with buildroot...
	I0805 16:20:44.573155    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.573299    4640 buildroot.go:166] provisioning hostname "multinode-985000"
	I0805 16:20:44.573311    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.573416    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.573499    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.573585    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.573680    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.573795    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.573922    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.574068    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.574076    4640 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000 && echo "multinode-985000" | sudo tee /etc/hostname
	I0805 16:20:44.637872    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000
	
	I0805 16:20:44.637892    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.638029    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.638132    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.638218    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.638297    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.638429    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.638562    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.638582    4640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:20:44.698340    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:20:44.698360    4640 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:20:44.698377    4640 buildroot.go:174] setting up certificates
	I0805 16:20:44.698389    4640 provision.go:84] configureAuth start
	I0805 16:20:44.698397    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.698544    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:44.698658    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.698750    4640 provision.go:143] copyHostCerts
	I0805 16:20:44.698781    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:20:44.698850    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:20:44.698858    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:20:44.699001    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:20:44.699205    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:20:44.699246    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:20:44.699250    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:20:44.699341    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:20:44.699482    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:20:44.699528    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:20:44.699533    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:20:44.699615    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:20:44.699756    4640 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-985000]
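
The "generating server cert" step above issues the Docker daemon's TLS server certificate from the minikube CA, with the SANs listed (127.0.0.1, 192.169.0.13, localhost, minikube, multinode-985000). A sketch of issuing such a certificate with crypto/x509; the helper name and throwaway CA are illustrative, not the libmachine provisioner, and the key-size choice is an assumption:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

// issueServerCert signs a server certificate for the given SANs with the
// supplied CA and writes it as PEM (private-key handling omitted).
func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dns []string, out string) error {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-985000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips, // e.g. 127.0.0.1, 192.169.0.13
		DNSNames:     dns, // e.g. localhost, minikube, multinode-985000
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return err
	}
	return os.WriteFile(out, pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
}

func main() {
	// throwaway CA just to exercise the helper
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{SerialNumber: big.NewInt(1), Subject: pkix.Name{CommonName: "minikubeCA"},
		NotBefore: time.Now(), NotAfter: time.Now().Add(24 * time.Hour),
		IsCA: true, KeyUsage: x509.KeyUsageCertSign, BasicConstraintsValid: true}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	err := issueServerCert(caCert, caKey,
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.13")},
		[]string{"localhost", "minikube", "multinode-985000"}, "/tmp/server.pem")
	fmt.Println("issued:", err)
}
```
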
	I0805 16:20:45.028860    4640 provision.go:177] copyRemoteCerts
	I0805 16:20:45.028920    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:20:45.028938    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.029080    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.029180    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.029338    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.029452    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:45.063652    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:20:45.063724    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:20:45.083743    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:20:45.083800    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0805 16:20:45.103791    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:20:45.103863    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:20:45.123716    4640 provision.go:87] duration metric: took 425.312704ms to configureAuth
	I0805 16:20:45.123731    4640 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:20:45.123881    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:20:45.123894    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:45.124028    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.124115    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.124206    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.124285    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.124381    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.124503    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.124632    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.124639    4640 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:20:45.176256    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:20:45.176269    4640 buildroot.go:70] root file system type: tmpfs
	I0805 16:20:45.176337    4640 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:20:45.176350    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.176482    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.176580    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.176695    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.176782    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.176911    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.177045    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.177090    4640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:20:45.240992    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:20:45.241023    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.241166    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.241270    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.241382    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.241469    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.241590    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.241743    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.241755    4640 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:20:46.765402    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
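
The `sudo diff -u ... || { sudo mv ...; systemctl ...; }` command above only installs the rendered docker.service and restarts the daemon when the new unit actually differs from the one on disk (here diff fails because no unit exists yet, so the install branch runs). The same change-detection pattern in Go, as a sketch rather than the ssh_runner implementation:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
)

// replaceIfChanged writes newContent to path only when it differs from the
// current file, returning true when a daemon-reload/restart is needed.
func replaceIfChanged(path string, newContent []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return false, nil // unchanged: skip reload and restart
	}
	if err != nil && !os.IsNotExist(err) {
		return false, err
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, newContent, 0o644); err != nil {
		return false, err
	}
	// rename is atomic on the same filesystem, like the `mv` in the log
	return true, os.Rename(tmp, path)
}

func main() {
	changed, err := replaceIfChanged("/tmp/docker.service",
		[]byte("[Unit]\nDescription=Docker Application Container Engine\n"))
	fmt.Println(changed, err)
}
```
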
	I0805 16:20:46.765418    4640 main.go:141] libmachine: Checking connection to Docker...
	I0805 16:20:46.765424    4640 main.go:141] libmachine: (multinode-985000) Calling .GetURL
	I0805 16:20:46.765563    4640 main.go:141] libmachine: Docker is up and running!
	I0805 16:20:46.765570    4640 main.go:141] libmachine: Reticulating splines...
	I0805 16:20:46.765575    4640 client.go:171] duration metric: took 14.036043683s to LocalClient.Create
	I0805 16:20:46.765592    4640 start.go:167] duration metric: took 14.036090848s to libmachine.API.Create "multinode-985000"
	I0805 16:20:46.765602    4640 start.go:293] postStartSetup for "multinode-985000" (driver="hyperkit")
	I0805 16:20:46.765609    4640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:20:46.765620    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.765765    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:20:46.765778    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.765878    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.765972    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.766070    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.766168    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.808597    4640 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:20:46.814840    4640 command_runner.go:130] > NAME=Buildroot
	I0805 16:20:46.814852    4640 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:20:46.814856    4640 command_runner.go:130] > ID=buildroot
	I0805 16:20:46.814869    4640 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:20:46.814873    4640 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:20:46.814969    4640 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:20:46.814985    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:20:46.815099    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:20:46.815290    4640 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:20:46.815297    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:20:46.815526    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:20:46.832473    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:20:46.852626    4640 start.go:296] duration metric: took 87.015317ms for postStartSetup
	I0805 16:20:46.852653    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:46.853264    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:46.853417    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:20:46.853762    4640 start.go:128] duration metric: took 14.156998155s to createHost
	I0805 16:20:46.853776    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.853870    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.853964    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.854078    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.854160    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.854284    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:46.854405    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:46.854413    4640 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 16:20:46.906137    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722900047.071906799
	
	I0805 16:20:46.906149    4640 fix.go:216] guest clock: 1722900047.071906799
	I0805 16:20:46.906154    4640 fix.go:229] Guest: 2024-08-05 16:20:47.071906799 -0700 PDT Remote: 2024-08-05 16:20:46.85377 -0700 PDT m=+14.585721958 (delta=218.136799ms)
	I0805 16:20:46.906178    4640 fix.go:200] guest clock delta is within tolerance: 218.136799ms
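
The clock check above runs `date +%s.%N` in the guest, compares the result against the host clock at the moment of the call, and only resyncs when the drift exceeds a tolerance; here the 218.136799ms delta passes. A sketch of that comparison, reproducing this log's numbers (the tolerance value below is an assumption; the log only shows the delta being accepted):

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far
// it drifts from the host clock. float64 limits precision to roughly
// hundreds of nanoseconds at this epoch, fine for a millisecond-scale check.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// guest and host timestamps taken from the log lines above
	d, err := clockDelta("1722900047.071906799", time.Unix(0, 1722900046853770000))
	if err != nil {
		panic(err)
	}
	within := math.Abs(d.Seconds()) < 1.0 // tolerance is an assumption
	fmt.Printf("delta=%v within tolerance=%v\n", d, within) // delta is about 218ms
}
```
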
	I0805 16:20:46.906182    4640 start.go:83] releasing machines lock for "multinode-985000", held for 14.209573761s
	I0805 16:20:46.906200    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906321    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:46.906429    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906734    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906832    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906917    4640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:20:46.906947    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.906977    4640 ssh_runner.go:195] Run: cat /version.json
	I0805 16:20:46.906987    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.907036    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.907080    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.907105    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.907167    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.907190    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.907251    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.907285    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.907353    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.936969    4640 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0805 16:20:46.937263    4640 ssh_runner.go:195] Run: systemctl --version
	I0805 16:20:46.992747    4640 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:20:46.993626    4640 command_runner.go:130] > systemd 252 (252)
	I0805 16:20:46.993660    4640 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0805 16:20:46.993799    4640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:20:46.998949    4640 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:20:46.998969    4640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:20:46.999002    4640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:20:47.012276    4640 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:20:47.012544    4640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:20:47.012556    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:20:47.012657    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:20:47.027593    4640 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:20:47.027660    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:20:47.035836    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:20:47.044911    4640 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:20:47.044968    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:20:47.053571    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:20:47.061858    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:20:47.070031    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:20:47.078524    4640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:20:47.087870    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:20:47.096303    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:20:47.104482    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:20:47.112756    4640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:20:47.120033    4640 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:20:47.120127    4640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:20:47.128644    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:47.220387    4640 ssh_runner.go:195] Run: sudo systemctl restart containerd
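minikube configures containerd first and docker second, which is why "detecting cgroup driver to use..." is logged twice in this stretch. The sed pipeline above rewrites /etc/containerd/config.toml in place; the fragment below is a sketch of what those substitutions leave behind, assuming the stock layout shipped in the minikube ISO (section paths vary across containerd releases):

	# /etc/containerd/config.toml, relevant fragment after the edits above (sketch)
	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.9"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false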
	I0805 16:20:47.239567    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:20:47.239642    4640 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:20:47.254939    4640 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:20:47.255001    4640 command_runner.go:130] > [Unit]
	I0805 16:20:47.255011    4640 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:20:47.255015    4640 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:20:47.255020    4640 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:20:47.255026    4640 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:20:47.255030    4640 command_runner.go:130] > StartLimitBurst=3
	I0805 16:20:47.255034    4640 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:20:47.255037    4640 command_runner.go:130] > [Service]
	I0805 16:20:47.255041    4640 command_runner.go:130] > Type=notify
	I0805 16:20:47.255055    4640 command_runner.go:130] > Restart=on-failure
	I0805 16:20:47.255063    4640 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:20:47.255073    4640 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:20:47.255080    4640 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:20:47.255088    4640 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:20:47.255094    4640 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:20:47.255099    4640 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:20:47.255112    4640 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:20:47.255120    4640 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:20:47.255128    4640 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:20:47.255134    4640 command_runner.go:130] > ExecStart=
	I0805 16:20:47.255164    4640 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:20:47.255172    4640 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:20:47.255182    4640 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:20:47.255189    4640 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:20:47.255193    4640 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:20:47.255196    4640 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:20:47.255200    4640 command_runner.go:130] > LimitCORE=infinity
	I0805 16:20:47.255205    4640 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:20:47.255209    4640 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:20:47.255212    4640 command_runner.go:130] > TasksMax=infinity
	I0805 16:20:47.255215    4640 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:20:47.255220    4640 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:20:47.255225    4640 command_runner.go:130] > Delegate=yes
	I0805 16:20:47.255230    4640 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:20:47.255233    4640 command_runner.go:130] > KillMode=process
	I0805 16:20:47.255236    4640 command_runner.go:130] > [Install]
	I0805 16:20:47.255259    4640 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:20:47.255324    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:20:47.269909    4640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:20:47.286027    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:20:47.296365    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:20:47.306405    4640 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:20:47.369760    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:20:47.379998    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:20:47.394696    4640 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:20:47.394951    4640 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:20:47.397850    4640 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:20:47.398038    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:20:47.406063    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:20:47.419537    4640 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:20:47.514227    4640 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:20:47.637079    4640 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:20:47.637156    4640 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:20:47.651314    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:47.748259    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:20:50.076345    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.32806615s)
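docker.go:574 switches the docker daemon to the "cgroupfs" driver by writing a 130-byte /etc/docker/daemon.json just before this restart. The payload itself is not echoed in the log; the sketch below is the shape minikube's provisioner typically writes, with exec-opts being the line that actually selects the driver:

	$ cat /etc/docker/daemon.json
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": {"max-size": "100m"},
	  "storage-driver": "overlay2"
	}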
	I0805 16:20:50.076407    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:20:50.086580    4640 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 16:20:50.099944    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:20:50.110410    4640 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:20:50.206329    4640 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:20:50.317239    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:50.417670    4640 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:20:50.431617    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:20:50.443305    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:50.555307    4640 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:20:50.610408    4640 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:20:50.610481    4640 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:20:50.614751    4640 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0805 16:20:50.614762    4640 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0805 16:20:50.614767    4640 command_runner.go:130] > Device: 0,22	Inode: 806         Links: 1
	I0805 16:20:50.614772    4640 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0805 16:20:50.614775    4640 command_runner.go:130] > Access: 2024-08-05 23:20:50.735793184 +0000
	I0805 16:20:50.614784    4640 command_runner.go:130] > Modify: 2024-08-05 23:20:50.735793184 +0000
	I0805 16:20:50.614789    4640 command_runner.go:130] > Change: 2024-08-05 23:20:50.736793062 +0000
	I0805 16:20:50.614792    4640 command_runner.go:130] >  Birth: -
	I0805 16:20:50.614829    4640 start.go:563] Will wait 60s for crictl version
	I0805 16:20:50.614890    4640 ssh_runner.go:195] Run: which crictl
	I0805 16:20:50.617807    4640 command_runner.go:130] > /usr/bin/crictl
	I0805 16:20:50.617933    4640 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:20:50.644026    4640 command_runner.go:130] > Version:  0.1.0
	I0805 16:20:50.644070    4640 command_runner.go:130] > RuntimeName:  docker
	I0805 16:20:50.644117    4640 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0805 16:20:50.644195    4640 command_runner.go:130] > RuntimeApiVersion:  v1
	I0805 16:20:50.645396    4640 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
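crictl picks up its endpoint from the /etc/crictl.yaml written a few steps earlier, so the bare "crictl version" above is already talking to cri-dockerd. The explicit equivalent, with the endpoint spelled out:

	sudo /usr/bin/crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version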
	I0805 16:20:50.645460    4640 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:20:50.661131    4640 command_runner.go:130] > 27.1.1
	I0805 16:20:50.662194    4640 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:20:50.677860    4640 command_runner.go:130] > 27.1.1
	I0805 16:20:50.700872    4640 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 16:20:50.700922    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:50.701316    4640 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0805 16:20:50.706154    4640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:20:50.715610    4640 kubeadm.go:883] updating cluster {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 16:20:50.715677    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:20:50.715736    4640 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:20:50.733572    4640 docker.go:685] Got preloaded images: 
	I0805 16:20:50.733584    4640 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0805 16:20:50.733634    4640 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:20:50.741005    4640 command_runner.go:139] > {"Repositories":{}}
	I0805 16:20:50.741090    4640 ssh_runner.go:195] Run: which lz4
	I0805 16:20:50.744527    4640 command_runner.go:130] > /usr/bin/lz4
	I0805 16:20:50.744558    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0805 16:20:50.744692    4640 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 16:20:50.747718    4640 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 16:20:50.747836    4640 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 16:20:50.747851    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0805 16:20:51.865752    4640 docker.go:649] duration metric: took 1.121114736s to copy over tarball
	I0805 16:20:51.865833    4640 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 16:20:54.241811    4640 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.375959074s)
	I0805 16:20:54.241825    4640 ssh_runner.go:146] rm: /preloaded.tar.lz4
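This is the image-preload path: the stat probe above fails as expected on a fresh VM, so the ~360 MB lz4 tarball is streamed in, unpacked over /var (which contains /var/lib/docker), and then deleted. Roughly the same flow by hand, using the paths from this log (ssh_runner copies over its own SSH session rather than shelling out to scp):

	scp -i /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa \
	    /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 \
	    docker@192.169.0.13:/preloaded.tar.lz4
	ssh docker@192.169.0.13 'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4'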
	I0805 16:20:54.267125    4640 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:20:54.275283    4640 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d2
89d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0805 16:20:54.275373    4640 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0805 16:20:54.288931    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:54.386395    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:20:56.795159    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.408741228s)
	I0805 16:20:56.795248    4640 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:20:56.808093    4640 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0805 16:20:56.808107    4640 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0805 16:20:56.808111    4640 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0805 16:20:56.808116    4640 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0805 16:20:56.808120    4640 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0805 16:20:56.808123    4640 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0805 16:20:56.808128    4640 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0805 16:20:56.808135    4640 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:20:56.809018    4640 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 16:20:56.809035    4640 cache_images.go:84] Images are preloaded, skipping loading
	I0805 16:20:56.809048    4640 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0805 16:20:56.809127    4640 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-985000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 16:20:56.809195    4640 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 16:20:56.847007    4640 command_runner.go:130] > cgroupfs
	I0805 16:20:56.847610    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:20:56.847620    4640 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 16:20:56.847630    4640 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 16:20:56.847650    4640 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-985000 NodeName:multinode-985000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 16:20:56.847744    4640 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-985000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 16:20:56.847807    4640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 16:20:56.855919    4640 command_runner.go:130] > kubeadm
	I0805 16:20:56.855931    4640 command_runner.go:130] > kubectl
	I0805 16:20:56.855934    4640 command_runner.go:130] > kubelet
	I0805 16:20:56.855959    4640 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:20:56.856010    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 16:20:56.863284    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0805 16:20:56.876753    4640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:20:56.890292    4640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
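The 2158 bytes just staged are the kubeadm config printed above; it lands as kubeadm.yaml.new and is promoted to kubeadm.yaml only after the stale-config checks further down. To vet a config like this by hand, recent kubeadm builds (v1.26 and later) can lint it; an illustrative invocation, not part of this run:

	sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
	    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml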
	I0805 16:20:56.904628    4640 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0805 16:20:56.907711    4640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
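Both /etc/hosts edits (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idiom: rewrite the file as the unprivileged SSH user into /tmp, then install it with sudo cp. A direct redirection after sudo would not work, because the redirection is performed by the calling shell before privileges are raised:

	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
	  echo $'192.169.0.13\tcontrol-plane.minikube.internal'; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts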
	I0805 16:20:56.917108    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:57.013172    4640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:20:57.028650    4640 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000 for IP: 192.169.0.13
	I0805 16:20:57.028663    4640 certs.go:194] generating shared ca certs ...
	I0805 16:20:57.028674    4640 certs.go:226] acquiring lock for ca certs: {Name:mkb83e058d89c7d4e66f4136f377a3c305b13735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.028863    4640 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key
	I0805 16:20:57.028935    4640 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key
	I0805 16:20:57.028946    4640 certs.go:256] generating profile certs ...
	I0805 16:20:57.028995    4640 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key
	I0805 16:20:57.029007    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt with IP's: []
	I0805 16:20:57.088127    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt ...
	I0805 16:20:57.088142    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt: {Name:mkb7087fa165ae496621b10df42dfd2f8603360a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.088531    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key ...
	I0805 16:20:57.088540    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key: {Name:mk37e627de9c39a2300d317d721ebf92a202a17e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.088775    4640 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec
	I0805 16:20:57.088790    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.13]
	I0805 16:20:57.189318    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec ...
	I0805 16:20:57.189336    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec: {Name:mkb4501af4f6db766eb719de2f42fc564a23d2d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.189653    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec ...
	I0805 16:20:57.189669    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec: {Name:mke641ddecfc5629bb592a5b6321d446ed3b31bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.189903    4640 certs.go:381] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt
	I0805 16:20:57.190140    4640 certs.go:385] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key
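The apiserver serving cert generated above is issued for IPs [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.13]: 10.96.0.1 is the first address of the 10.96.0.0/12 service CIDR (the in-cluster IP of the kubernetes.default service) and 192.169.0.13 is the node IP, so clients inside and outside the cluster can both verify the cert. With a reasonably recent OpenSSL the SANs can be inspected directly:

	openssl x509 -noout -ext subjectAltName \
	    -in /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt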
	I0805 16:20:57.190318    4640 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key
	I0805 16:20:57.190336    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt with IP's: []
	I0805 16:20:57.386717    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt ...
	I0805 16:20:57.386733    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt: {Name:mk486344c8c5b8383e5349f68a995b553e8d31c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.387043    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key ...
	I0805 16:20:57.387052    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key: {Name:mk2b24e1a5e962e12395adf21e4f6ad64901ee0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.387278    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 16:20:57.387306    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 16:20:57.387325    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 16:20:57.387349    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 16:20:57.387368    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 16:20:57.387391    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 16:20:57.387411    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 16:20:57.387432    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 16:20:57.387531    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem (1338 bytes)
	W0805 16:20:57.387583    4640 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0805 16:20:57.387591    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:20:57.387621    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem (1082 bytes)
	I0805 16:20:57.387656    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:20:57.387684    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem (1675 bytes)
	I0805 16:20:57.387747    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:20:57.387781    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem -> /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.387803    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.387822    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.388188    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:20:57.408800    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 16:20:57.429927    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:20:57.449924    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:20:57.470736    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 16:20:57.490564    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 16:20:57.511342    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:20:57.531190    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 16:20:57.551984    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0805 16:20:57.571601    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0805 16:20:57.592369    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:20:57.611866    4640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 16:20:57.626527    4640 ssh_runner.go:195] Run: openssl version
	I0805 16:20:57.630504    4640 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0805 16:20:57.630711    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0805 16:20:57.638913    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642115    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642280    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642315    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.646345    4640 command_runner.go:130] > 51391683
	I0805 16:20:57.646544    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0805 16:20:57.654953    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0805 16:20:57.663842    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667242    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667258    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667300    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.671438    4640 command_runner.go:130] > 3ec20f2e
	I0805 16:20:57.671648    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 16:20:57.679692    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:20:57.688061    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691411    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691493    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691531    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.695572    4640 command_runner.go:130] > b5213941
	I0805 16:20:57.695754    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
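The three hashes printed above (51391683, 3ec20f2e, b5213941) are OpenSSL subject-name hashes, and the "<hash>.0" symlinks give OpenSSL's default verifier its lookup path for each CA, the same scheme c_rehash uses. For the minikube CA the derivation is:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"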
	I0805 16:20:57.704703    4640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:20:57.707752    4640 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 16:20:57.707872    4640 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 16:20:57.707921    4640 kubeadm.go:392] StartCluster: {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:20:57.708054    4640 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:20:57.720408    4640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 16:20:57.731114    4640 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0805 16:20:57.731128    4640 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0805 16:20:57.731133    4640 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0805 16:20:57.731194    4640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 16:20:57.739645    4640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 16:20:57.751095    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0805 16:20:57.751108    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0805 16:20:57.751113    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0805 16:20:57.751120    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:20:57.751266    4640 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:20:57.751273    4640 kubeadm.go:157] found existing configuration files:
	
	I0805 16:20:57.751324    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 16:20:57.759086    4640 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:20:57.759185    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:20:57.759233    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 16:20:57.769060    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 16:20:57.778103    4640 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:20:57.778143    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:20:57.778190    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 16:20:57.786612    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 16:20:57.794733    4640 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:20:57.794754    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:20:57.794796    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 16:20:57.802671    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 16:20:57.810242    4640 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:20:57.810264    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:20:57.810299    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 16:20:57.818339    4640 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
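kubeadm init is pinned to the staged config, and the long --ignore-preflight-errors list tells it to tolerate the pre-populated directories and the deliberately small VM (Mem, NumCPU, Swap). When a run dies in this stretch, individual phases can be replayed against the same config; illustrative, not executed here:

	sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
	    kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml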
	I0805 16:20:57.890449    4640 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 16:20:57.890461    4640 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0805 16:20:57.890501    4640 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 16:20:57.890507    4640 command_runner.go:130] > [preflight] Running pre-flight checks
	I0805 16:20:57.984851    4640 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 16:20:57.984855    4640 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 16:20:57.984956    4640 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 16:20:57.984962    4640 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 16:20:57.985041    4640 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 16:20:57.985038    4640 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 16:20:58.152965    4640 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:20:58.152995    4640 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:20:58.175785    4640 out.go:204]   - Generating certificates and keys ...
	I0805 16:20:58.175840    4640 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0805 16:20:58.175851    4640 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 16:20:58.175914    4640 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0805 16:20:58.175920    4640 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 16:20:58.229002    4640 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 16:20:58.229016    4640 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 16:20:58.322701    4640 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 16:20:58.322717    4640 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0805 16:20:58.394063    4640 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 16:20:58.394077    4640 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0805 16:20:58.601975    4640 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 16:20:58.601995    4640 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0805 16:20:58.821056    4640 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 16:20:58.821065    4640 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0805 16:20:58.821204    4640 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:58.821214    4640 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.150811    4640 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 16:20:59.150817    4640 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0805 16:20:59.151036    4640 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.151046    4640 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.206073    4640 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 16:20:59.206088    4640 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 16:20:59.294956    4640 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 16:20:59.294966    4640 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 16:20:59.348591    4640 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 16:20:59.348602    4640 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0805 16:20:59.348788    4640 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:20:59.348797    4640 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:20:59.511379    4640 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:20:59.511395    4640 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:20:59.789652    4640 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 16:20:59.789666    4640 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 16:20:59.965508    4640 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:20:59.965517    4640 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:21:00.208268    4640 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:21:00.208284    4640 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:21:00.402575    4640 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:21:00.402582    4640 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:21:00.409122    4640 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 16:21:00.409137    4640 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 16:21:00.410639    4640 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:21:00.410652    4640 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:21:00.430944    4640 out.go:204]   - Booting up control plane ...
	I0805 16:21:00.431017    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:21:00.431032    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:21:00.431106    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:21:00.431106    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:21:00.431174    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:21:00.431182    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:21:00.431274    4640 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:21:00.431286    4640 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:21:00.431361    4640 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:21:00.431369    4640 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:21:00.431399    4640 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 16:21:00.431405    4640 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0805 16:21:00.540991    4640 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 16:21:00.541004    4640 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 16:21:00.541076    4640 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 16:21:00.541081    4640 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 16:21:01.042556    4640 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.719164ms
	I0805 16:21:01.042573    4640 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.719164ms
	I0805 16:21:01.042632    4640 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 16:21:01.042639    4640 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 16:21:05.541995    4640 kubeadm.go:310] [api-check] The API server is healthy after 4.502407968s
	I0805 16:21:05.542014    4640 command_runner.go:130] > [api-check] The API server is healthy after 4.502407968s
	I0805 16:21:05.551474    4640 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 16:21:05.551486    4640 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 16:21:05.558278    4640 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 16:21:05.558284    4640 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 16:21:05.572116    4640 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 16:21:05.572130    4640 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0805 16:21:05.572281    4640 kubeadm.go:310] [mark-control-plane] Marking the node multinode-985000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 16:21:05.572292    4640 command_runner.go:130] > [mark-control-plane] Marking the node multinode-985000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 16:21:05.579214    4640 kubeadm.go:310] [bootstrap-token] Using token: 0mwls8.ribzsy6ooov2flu0
	I0805 16:21:05.579225    4640 command_runner.go:130] > [bootstrap-token] Using token: 0mwls8.ribzsy6ooov2flu0
	I0805 16:21:05.613851    4640 out.go:204]   - Configuring RBAC rules ...
	I0805 16:21:05.613974    4640 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 16:21:05.613988    4640 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 16:21:05.655317    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 16:21:05.655329    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 16:21:05.659733    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 16:21:05.659737    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 16:21:05.661608    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 16:21:05.661619    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 16:21:05.663605    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 16:21:05.663612    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 16:21:05.665771    4640 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 16:21:05.665778    4640 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 16:21:05.947572    4640 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 16:21:05.947585    4640 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 16:21:06.357765    4640 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 16:21:06.357776    4640 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0805 16:21:06.946930    4640 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 16:21:06.946942    4640 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0805 16:21:06.947937    4640 kubeadm.go:310] 
	I0805 16:21:06.947989    4640 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 16:21:06.947996    4640 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0805 16:21:06.948000    4640 kubeadm.go:310] 
	I0805 16:21:06.948071    4640 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 16:21:06.948080    4640 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0805 16:21:06.948088    4640 kubeadm.go:310] 
	I0805 16:21:06.948121    4640 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 16:21:06.948125    4640 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0805 16:21:06.948179    4640 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 16:21:06.948187    4640 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 16:21:06.948229    4640 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 16:21:06.948234    4640 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 16:21:06.948237    4640 kubeadm.go:310] 
	I0805 16:21:06.948284    4640 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 16:21:06.948302    4640 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0805 16:21:06.948309    4640 kubeadm.go:310] 
	I0805 16:21:06.948354    4640 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 16:21:06.948367    4640 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 16:21:06.948375    4640 kubeadm.go:310] 
	I0805 16:21:06.948414    4640 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 16:21:06.948418    4640 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0805 16:21:06.948479    4640 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 16:21:06.948488    4640 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 16:21:06.948558    4640 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 16:21:06.948564    4640 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 16:21:06.948570    4640 kubeadm.go:310] 
	I0805 16:21:06.948633    4640 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 16:21:06.948638    4640 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0805 16:21:06.948701    4640 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 16:21:06.948708    4640 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0805 16:21:06.948715    4640 kubeadm.go:310] 
	I0805 16:21:06.948788    4640 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.948795    4640 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.948879    4640 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf \
	I0805 16:21:06.948886    4640 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf \
	I0805 16:21:06.948905    4640 kubeadm.go:310] 	--control-plane 
	I0805 16:21:06.948911    4640 command_runner.go:130] > 	--control-plane 
	I0805 16:21:06.948916    4640 kubeadm.go:310] 
	I0805 16:21:06.948980    4640 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 16:21:06.948984    4640 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0805 16:21:06.948987    4640 kubeadm.go:310] 
	I0805 16:21:06.949052    4640 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.949057    4640 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.949136    4640 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf 
	I0805 16:21:06.949141    4640 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf 
	I0805 16:21:06.949613    4640 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 16:21:06.949621    4640 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
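[editor's note] The join commands printed above carry a bootstrap token plus a CA public-key pin. As a hedged illustration (not minikube's or kubeadm's code), a --discovery-token-ca-cert-hash value of this form is the SHA-256 of the CA certificate's Subject Public Key Info, printed in RFC 7469 pin style; the path below is an assumption based on where kubeadm writes the cluster CA:

// ca_hash.go - hypothetical sketch: derive a kubeadm-style
// --discovery-token-ca-cert-hash from a CA certificate.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Assumed path: kubeadm stores the cluster CA at /etc/kubernetes/pki/ca.crt.
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// kubeadm pins the SHA-256 of the certificate's Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum) // compare with the hash in the join command above
}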
	I0805 16:21:06.949644    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:21:06.949649    4640 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 16:21:06.972147    4640 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0805 16:21:07.030449    4640 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0805 16:21:07.036220    4640 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0805 16:21:07.036233    4640 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0805 16:21:07.036239    4640 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0805 16:21:07.036249    4640 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 16:21:07.036254    4640 command_runner.go:130] > Access: 2024-08-05 23:20:43.694299549 +0000
	I0805 16:21:07.036259    4640 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0805 16:21:07.036264    4640 command_runner.go:130] > Change: 2024-08-05 23:20:41.058596444 +0000
	I0805 16:21:07.036266    4640 command_runner.go:130] >  Birth: -
	I0805 16:21:07.036368    4640 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0805 16:21:07.036375    4640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0805 16:21:07.050414    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0805 16:21:07.243070    4640 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0805 16:21:07.246445    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0805 16:21:07.250670    4640 command_runner.go:130] > serviceaccount/kindnet created
	I0805 16:21:07.255971    4640 command_runner.go:130] > daemonset.apps/kindnet created
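[editor's note] The lines above show the CNI step: minikube copies a kindnet manifest to /var/tmp/minikube/cni.yaml on the node and applies it with the bundled kubectl. A minimal sketch of that pattern (paths taken from the log; not minikube's actual implementation):

// apply_cni.go - sketch of the step above: write a CNI manifest to the
// node-local path and apply it with the bundled kubectl.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Usage: apply_cni <manifest.yaml> - the real run uses a 2438-byte kindnet manifest.
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: apply_cni <manifest.yaml>")
		os.Exit(2)
	}
	manifest, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	const path = "/var/tmp/minikube/cni.yaml" // target path from the log
	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.3/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet created"
	if err != nil {
		os.Exit(1)
	}
}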
	I0805 16:21:07.257424    4640 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 16:21:07.257500    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-985000 minikube.k8s.io/updated_at=2024_08_05T16_21_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=multinode-985000 minikube.k8s.io/primary=true
	I0805 16:21:07.257502    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.266956    4640 command_runner.go:130] > -16
	I0805 16:21:07.267023    4640 ops.go:34] apiserver oom_adj: -16
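[editor's note] The oom_adj probe above confirms the API server's OOM score adjustment; -16 makes the kernel's OOM killer strongly prefer other processes. A sketch of the same probe (Linux /proc semantics assumed):

// oom_probe.go - sketch: resolve the kube-apiserver PID with pgrep and
// read its oom_adj from /proc.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pidOut, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
		os.Exit(1)
	}
	pid := strings.Fields(string(pidOut))[0] // first match is enough here
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("apiserver oom_adj: %s", adj) // -16 in the run above
}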
	I0805 16:21:07.390396    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0805 16:21:07.392070    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.400579    4640 command_runner.go:130] > node/multinode-985000 labeled
	I0805 16:21:07.456213    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:07.893323    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.956622    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:08.392391    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:08.450793    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:08.892411    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:08.950456    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:09.393238    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:09.450291    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:09.892156    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:09.951159    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:10.393019    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:10.451734    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:10.893100    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:10.954360    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:11.393009    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:11.452879    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:11.894187    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:11.953480    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:12.392194    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:12.452444    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:12.894265    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:12.955367    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:13.392882    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:13.455680    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:13.892568    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:13.950195    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:14.393254    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:14.452940    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:14.892187    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:14.948447    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:15.392762    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:15.451815    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:15.892531    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:15.952781    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:16.393008    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:16.454659    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:16.892423    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:16.957989    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:17.392489    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:17.452653    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:17.892453    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:17.953809    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:18.392692    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:18.450726    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:18.893940    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:18.957266    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:19.393402    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:19.452345    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:19.892761    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:19.952524    4640 command_runner.go:130] > NAME      SECRETS   AGE
	I0805 16:21:19.952537    4640 command_runner.go:130] > default   0         1s
	I0805 16:21:19.952551    4640 kubeadm.go:1113] duration metric: took 12.695106906s to wait for elevateKubeSystemPrivileges
	I0805 16:21:19.952568    4640 kubeadm.go:394] duration metric: took 22.244643678s to StartCluster
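[editor's note] The run of "serviceaccounts \"default\" not found" errors above is expected: the default ServiceAccount only appears once the controller manager's token controller has run, so minikube polls `kubectl get sa default` at a fixed interval until the NotFound errors stop (about 12.7s here). A generic sketch of such a poll; the interval and timeout are assumptions, not minikube's exact tuning:

// wait_sa.go - generic retry loop: run a command every 500ms until it
// succeeds or the deadline passes.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitFor(timeout, interval time.Duration, run func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := run()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: last error: %w", err)
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitFor(2*time.Minute, 500*time.Millisecond, func() error {
		return exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.3/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("default service account is ready")
}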
	I0805 16:21:19.952584    4640 settings.go:142] acquiring lock: {Name:mk564a817a54ecf2aef16a4d2309e85208c0231f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:21:19.952678    4640 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:19.953130    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/kubeconfig: {Name:mk2a0d8b4d330b3c26432fc65d015ddf98a9cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:21:19.953387    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 16:21:19.953391    4640 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:21:19.953437    4640 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 16:21:19.953474    4640 addons.go:69] Setting storage-provisioner=true in profile "multinode-985000"
	I0805 16:21:19.953501    4640 addons.go:234] Setting addon storage-provisioner=true in "multinode-985000"
	I0805 16:21:19.953507    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:19.953501    4640 addons.go:69] Setting default-storageclass=true in profile "multinode-985000"
	I0805 16:21:19.953520    4640 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:21:19.953542    4640 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-985000"
	I0805 16:21:19.953772    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.953787    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.953870    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.953897    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.962985    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52500
	I0805 16:21:19.963341    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52502
	I0805 16:21:19.963365    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.963645    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.963722    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.963735    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.963997    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.964004    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.964027    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.964249    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.964372    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.964430    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.964458    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.964465    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.964535    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.966651    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:19.966874    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I0805 16:21:19.967275    4640 cert_rotation.go:137] Starting client certificate rotation controller
	I0805 16:21:19.967411    4640 addons.go:234] Setting addon default-storageclass=true in "multinode-985000"
	I0805 16:21:19.967434    4640 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:21:19.967665    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.967688    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.973226    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52504
	I0805 16:21:19.973568    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.973922    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.973942    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.974163    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.974282    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.974363    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.974444    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.975405    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:21:19.975491    4640 out.go:177] * Verifying Kubernetes components...
	I0805 16:21:19.976182    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52506
	I0805 16:21:19.976461    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.976795    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.976812    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.976999    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.977392    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.977409    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.986027    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52508
	I0805 16:21:19.986361    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.986712    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.986741    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.986959    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.987071    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.987149    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.987227    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.988179    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:21:19.988299    4640 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 16:21:19.988307    4640 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 16:21:19.988315    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:21:19.988395    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:21:19.988484    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:21:19.988568    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:21:19.988639    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:21:20.032241    4640 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:21:20.032361    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:20.069496    4640 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:21:20.069510    4640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 16:21:20.069530    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:21:20.069717    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:21:20.069824    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:21:20.069935    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:21:20.070041    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:21:20.084762    4640 command_runner.go:130] > apiVersion: v1
	I0805 16:21:20.084775    4640 command_runner.go:130] > data:
	I0805 16:21:20.084779    4640 command_runner.go:130] >   Corefile: |
	I0805 16:21:20.084782    4640 command_runner.go:130] >     .:53 {
	I0805 16:21:20.084785    4640 command_runner.go:130] >         errors
	I0805 16:21:20.084790    4640 command_runner.go:130] >         health {
	I0805 16:21:20.084794    4640 command_runner.go:130] >            lameduck 5s
	I0805 16:21:20.084796    4640 command_runner.go:130] >         }
	I0805 16:21:20.084812    4640 command_runner.go:130] >         ready
	I0805 16:21:20.084822    4640 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0805 16:21:20.084829    4640 command_runner.go:130] >            pods insecure
	I0805 16:21:20.084833    4640 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0805 16:21:20.084841    4640 command_runner.go:130] >            ttl 30
	I0805 16:21:20.084853    4640 command_runner.go:130] >         }
	I0805 16:21:20.084863    4640 command_runner.go:130] >         prometheus :9153
	I0805 16:21:20.084868    4640 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0805 16:21:20.084880    4640 command_runner.go:130] >            max_concurrent 1000
	I0805 16:21:20.084884    4640 command_runner.go:130] >         }
	I0805 16:21:20.084887    4640 command_runner.go:130] >         cache 30
	I0805 16:21:20.084898    4640 command_runner.go:130] >         loop
	I0805 16:21:20.084902    4640 command_runner.go:130] >         reload
	I0805 16:21:20.084905    4640 command_runner.go:130] >         loadbalance
	I0805 16:21:20.084908    4640 command_runner.go:130] >     }
	I0805 16:21:20.084911    4640 command_runner.go:130] > kind: ConfigMap
	I0805 16:21:20.084914    4640 command_runner.go:130] > metadata:
	I0805 16:21:20.084921    4640 command_runner.go:130] >   creationTimestamp: "2024-08-05T23:21:06Z"
	I0805 16:21:20.084926    4640 command_runner.go:130] >   name: coredns
	I0805 16:21:20.084929    4640 command_runner.go:130] >   namespace: kube-system
	I0805 16:21:20.084933    4640 command_runner.go:130] >   resourceVersion: "266"
	I0805 16:21:20.084937    4640 command_runner.go:130] >   uid: 5057af03-8824-4e67-a4b6-ef90c1ded7ce
	I0805 16:21:20.085056    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0805 16:21:20.184335    4640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:21:20.203408    4640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 16:21:20.278639    4640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:21:20.507141    4640 command_runner.go:130] > configmap/coredns replaced
	I0805 16:21:20.511660    4640 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
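[editor's note] The shell pipeline at 16:21:20.085056 rewrites the live coredns ConfigMap with sed before replacing it: one expression inserts a `log` directive ahead of `errors`, the other inserts a `hosts` stanza ahead of the `forward` block, which is what makes host.minikube.internal (192.169.0.1, the hyperkit gateway) resolvable from pods. Applying those sed expressions to the Corefile dumped above would yield this server block (reconstructed, not captured output):

.:53 {
    log
    errors
    health {
       lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       fallthrough in-addr.arpa ip6.arpa
       ttl 30
    }
    prometheus :9153
    hosts {
       192.169.0.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
    cache 30
    loop
    reload
    loadbalance
}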
	I0805 16:21:20.511929    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:20.511932    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:20.512124    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:20.512125    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:20.512341    4640 node_ready.go:35] waiting up to 6m0s for node "multinode-985000" to be "Ready" ...
	I0805 16:21:20.512409    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:20.512416    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.512423    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.512424    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:20.512428    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.512430    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.512438    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.512446    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.520076    4640 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0805 16:21:20.520087    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.520092    4640 round_trippers.go:580]     Audit-Id: 304f14c4-a466-4fb6-b401-b28f4df4dfa1
	I0805 16:21:20.520095    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.520103    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.520107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.520111    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.520113    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:20.520117    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.521443    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.521456    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.521464    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.521474    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"381","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.521479    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.521487    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.521502    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.521511    4640 round_trippers.go:580]     Audit-Id: bcd9e393-6b08-4ffb-a73b-6e7c430f0212
	I0805 16:21:20.521518    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.521831    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:20.521865    4640 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"381","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.521904    4640 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:20.521914    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.521921    4640 round_trippers.go:473]     Content-Type: application/json
	I0805 16:21:20.521930    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.521935    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.530726    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.530739    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.530744    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.530748    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:20.530751    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.530754    4640 round_trippers.go:580]     Audit-Id: ba15a3b2-b69b-473e-a331-81e01385ad47
	I0805 16:21:20.530756    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.530758    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.530761    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.530773    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"383","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.588534    4640 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0805 16:21:20.588563    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.588570    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.588737    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.588752    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.588765    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.588764    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.588772    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.588919    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.588920    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.588931    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.589012    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I0805 16:21:20.589020    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.589028    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.589034    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.597496    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.597508    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.597513    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.597518    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.597521    4640 round_trippers.go:580]     Content-Length: 1273
	I0805 16:21:20.597523    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.597525    4640 round_trippers.go:580]     Audit-Id: d7394cfc-1eb3-4623-8a7f-a5088a0398c8
	I0805 16:21:20.597527    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.597530    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.597844    4640 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"391"},"items":[{"metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0805 16:21:20.598117    4640 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0805 16:21:20.598145    4640 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0805 16:21:20.598150    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.598157    4640 round_trippers.go:473]     Content-Type: application/json
	I0805 16:21:20.598166    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.598171    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.619819    4640 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0805 16:21:20.619836    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.619842    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.619846    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.619849    4640 round_trippers.go:580]     Content-Length: 1220
	I0805 16:21:20.619852    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.619855    4640 round_trippers.go:580]     Audit-Id: 299d4cc8-0cb5-4dd5-80b3-5d54592ecd90
	I0805 16:21:20.619859    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.619861    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.619898    4640 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0805 16:21:20.619983    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.619992    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.620141    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.620153    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.620166    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.750372    4640 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0805 16:21:20.753871    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0805 16:21:20.759257    4640 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0805 16:21:20.767575    4640 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0805 16:21:20.774745    4640 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0805 16:21:20.786454    4640 command_runner.go:130] > pod/storage-provisioner created
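[editor's note] The two kubectl apply runs at 16:21:20.203408 and 16:21:20.278639 install the default-storageclass and storage-provisioner addons; the "created" lines above are the objects the second apply produces. A sketch of the same invocation pattern, with KUBECONFIG passed as a sudo environment assignment exactly as in the log:

// apply_addon.go - sketch of the addon apply above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.3/kubectl",
		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // "serviceaccount/storage-provisioner created", ...
	if err != nil {
		os.Exit(1)
	}
}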
	I0805 16:21:20.787838    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.787851    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.788087    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.788087    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.788098    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.788109    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.788117    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.788261    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.788280    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.788280    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.811467    4640 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0805 16:21:20.871433    4640 addons.go:510] duration metric: took 917.995637ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0805 16:21:21.014507    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:21.014532    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.014545    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.014553    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.014605    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:21.014619    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.014631    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.014638    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.017465    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.017464    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.017480    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.017492    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.017492    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.017496    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:21.017502    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.017504    4640 round_trippers.go:580]     Audit-Id: fb264fed-80ee-469b-a34e-7b1e8460f94b
	I0805 16:21:21.017506    4640 round_trippers.go:580]     Audit-Id: c9362211-8dfc-4385-87db-76c6486df53e
	I0805 16:21:21.017512    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.017513    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.017518    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.017519    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.017522    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.017524    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.017529    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.017545    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.017616    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"395","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:21.017684    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:21.017735    4640 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-985000" context rescaled to 1 replicas
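	For context, the kapi.go:214 line above reflects minikube reading the coredns Deployment's scale subresource (the GET .../deployments/coredns/scale requests traced by round_trippers) and writing it back only when the replica count differs. The following is a minimal client-go sketch of such a read-modify-write, not minikube's actual code; the kubeconfig path is a placeholder assumption, while the namespace and deployment name are taken from the log:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path (assumption); the test run above uses its
		// own minikube-integration home instead.
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// GET /apis/apps/v1/namespaces/kube-system/deployments/coredns/scale,
		// mirroring the round_trippers request logged above.
		scale, err := clientset.AppsV1().Deployments("kube-system").
			GetScale(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("coredns replicas: %d\n", scale.Spec.Replicas)

		// Write back only when a change is needed, matching the
		// "rescaled to 1 replicas" step in the log.
		if scale.Spec.Replicas != 1 {
			scale.Spec.Replicas = 1
			if _, err := clientset.AppsV1().Deployments("kube-system").
				UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
				panic(err)
			}
		}
	}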
	I0805 16:21:21.514170    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:21.514200    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.514219    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.514226    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.516804    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.516819    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.516826    4640 round_trippers.go:580]     Audit-Id: 9396255c-231d-48cb-a53f-22663307b969
	I0805 16:21:21.516830    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.516834    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.516839    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.516849    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.516854    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.516951    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.013275    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:22.013299    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:22.013311    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:22.013319    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:22.016138    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:22.016155    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:22.016163    4640 round_trippers.go:580]     Audit-Id: cc869aef-9ab4-4a7f-8835-cce2afa76dd9
	I0805 16:21:22.016168    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:22.016175    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:22.016182    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:22.016187    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:22.016193    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:22 GMT
	I0805 16:21:22.016497    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.512546    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:22.512561    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:22.512567    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:22.512572    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:22.515381    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:22.515393    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:22.515401    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:22.515407    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:22.515412    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:22.515416    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:22 GMT
	I0805 16:21:22.515420    4640 round_trippers.go:580]     Audit-Id: e7d470a0-7df5-4d85-9bb5-cbf15cfa989f
	I0805 16:21:22.515423    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:22.515634    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.515838    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
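	The node_ready.go:53 line above is emitted by a poll loop that repeatedly GETs the Node object and inspects its Ready condition, which explains the roughly 500ms cadence of the identical requests that follow. A hedged sketch of an equivalent readiness poll with client-go; the interval and kubeconfig path are assumptions rather than minikube's actual wiring, and the node name comes from the log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the Node's Ready condition is True.
	func nodeReady(node *corev1.Node) bool {
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path (assumption).
		config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		// Poll about every 500ms, matching the cadence visible in the
		// log timestamps above.
		for {
			node, err := clientset.CoreV1().Nodes().
				Get(context.TODO(), "multinode-985000", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			if nodeReady(node) {
				fmt.Println("node is Ready")
				return
			}
			fmt.Println(`node "multinode-985000" has status "Ready":"False"`)
			time.Sleep(500 * time.Millisecond)
		}
	}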
	I0805 16:21:23.012594    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:23.012606    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:23.012612    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:23.012616    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:23.014085    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:23.014095    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:23.014101    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:23.014104    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:23.014107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:23.014109    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:23.014113    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:23 GMT
	I0805 16:21:23.014116    4640 round_trippers.go:580]     Audit-Id: e12d5034-3bd9-498b-844e-12133805ded9
	I0805 16:21:23.014306    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:23.513150    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:23.513163    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:23.513168    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:23.513172    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:23.514595    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:23.514604    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:23.514610    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:23.514614    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:23.514617    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:23.514619    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:23.514622    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:23 GMT
	I0805 16:21:23.514635    4640 round_trippers.go:580]     Audit-Id: 2bc52e3b-1575-453f-87fa-51f4301a9426
	I0805 16:21:23.514871    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:24.012814    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:24.012826    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:24.012832    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:24.012835    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:24.014366    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:24.014379    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:24.014384    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:24.014388    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:24.014406    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:24.014411    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:24.014414    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:24 GMT
	I0805 16:21:24.014417    4640 round_trippers.go:580]     Audit-Id: f14d8611-e5e1-45fe-92f3-95559148c71b
	I0805 16:21:24.014572    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:24.513607    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:24.513620    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:24.513626    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:24.513629    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:24.515210    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:24.515220    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:24.515242    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:24.515253    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:24.515260    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:24.515264    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:24.515268    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:24 GMT
	I0805 16:21:24.515271    4640 round_trippers.go:580]     Audit-Id: 0a897d84-d437-4212-b36d-e414fedf55d4
	I0805 16:21:24.515427    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:25.013253    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:25.013272    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:25.013283    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:25.013321    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:25.015275    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:25.015308    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:25.015317    4640 round_trippers.go:580]     Audit-Id: ced7b45c-a072-4322-89ab-d0cc21ddfb1d
	I0805 16:21:25.015322    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:25.015325    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:25.015328    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:25.015332    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:25.015336    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:25 GMT
	I0805 16:21:25.015627    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:25.015849    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:25.512881    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:25.512902    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:25.512914    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:25.512920    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:25.515502    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:25.515517    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:25.515524    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:25.515529    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:25.515534    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:25.515538    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:25.515542    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:25 GMT
	I0805 16:21:25.515545    4640 round_trippers.go:580]     Audit-Id: dd6b59c1-dde3-4d67-b446-8823ad717d4f
	I0805 16:21:25.515665    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:26.013787    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:26.013811    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:26.013824    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:26.013830    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:26.016420    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:26.016440    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:26.016463    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:26 GMT
	I0805 16:21:26.016470    4640 round_trippers.go:580]     Audit-Id: 19939705-2879-44e6-830c-0c86394087ed
	I0805 16:21:26.016473    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:26.016485    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:26.016490    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:26.016494    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:26.016965    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:26.512523    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:26.512536    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:26.512541    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:26.512544    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:26.514158    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:26.514167    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:26.514172    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:26.514176    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:26.514179    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:26.514182    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:26 GMT
	I0805 16:21:26.514184    4640 round_trippers.go:580]     Audit-Id: f2346665-2701-41e1-94b0-41a70aa2f170
	I0805 16:21:26.514187    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:26.514489    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:27.013107    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:27.013136    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:27.013148    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:27.013155    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:27.015615    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:27.015632    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:27.015639    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:27 GMT
	I0805 16:21:27.015655    4640 round_trippers.go:580]     Audit-Id: 6abee22d-c1db-48e9-99db-e07791ed571f
	I0805 16:21:27.015661    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:27.015664    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:27.015667    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:27.015672    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:27.015747    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:27.015996    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:27.513549    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:27.513570    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:27.513582    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:27.513589    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:27.516173    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:27.516189    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:27.516197    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:27.516200    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:27.516204    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:27.516209    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:27 GMT
	I0805 16:21:27.516212    4640 round_trippers.go:580]     Audit-Id: a227585b-ae23-4bd1-b1dc-643eadd970cc
	I0805 16:21:27.516215    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:27.516416    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:28.014104    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:28.014132    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:28.014143    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:28.014159    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:28.016690    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:28.016705    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:28.016713    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:28.016717    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:28.016721    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:28.016725    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:28.016728    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:28 GMT
	I0805 16:21:28.016731    4640 round_trippers.go:580]     Audit-Id: 0d14831c-cc1f-41a9-a252-85e191b9594d
	I0805 16:21:28.016834    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:28.512703    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:28.512726    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:28.512739    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:28.512747    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:28.515176    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:28.515190    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:28.515197    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:28 GMT
	I0805 16:21:28.515201    4640 round_trippers.go:580]     Audit-Id: 6af459f8-bb08-43bf-ac7f-51ccacd5d664
	I0805 16:21:28.515206    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:28.515211    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:28.515215    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:28.515219    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:28.515378    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.013324    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:29.013354    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:29.013360    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:29.013364    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:29.014793    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:29.014804    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:29.014809    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:29.014813    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:29 GMT
	I0805 16:21:29.014817    4640 round_trippers.go:580]     Audit-Id: 2e50ff34-0c55-4136-b537-eee73f73706d
	I0805 16:21:29.014819    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:29.014822    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:29.014826    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:29.015098    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.513802    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:29.513832    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:29.513844    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:29.513852    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:29.516479    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:29.516496    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:29.516504    4640 round_trippers.go:580]     Audit-Id: bcbc3920-26b4-45f4-b91a-ce0e3dc11770
	I0805 16:21:29.516529    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:29.516538    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:29.516544    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:29.516549    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:29.516554    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:29 GMT
	I0805 16:21:29.516682    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.516938    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:30.013325    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:30.013349    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:30.013436    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:30.013448    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:30.016209    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:30.016222    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:30.016228    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:30.016233    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:30 GMT
	I0805 16:21:30.016238    4640 round_trippers.go:580]     Audit-Id: fb0bd3e0-89c3-4c77-a27d-be315cab22b7
	I0805 16:21:30.016242    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:30.016277    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:30.016283    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:30.016477    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:30.514344    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:30.514386    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:30.514482    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:30.514494    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:30.518828    4640 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 16:21:30.518860    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:30.518870    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:30.518876    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:30.518882    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:30 GMT
	I0805 16:21:30.518888    4640 round_trippers.go:580]     Audit-Id: c1b08932-ee78-4dcb-a190-3a8b24421284
	I0805 16:21:30.518894    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:30.518899    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:30.519002    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:31.012673    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:31.012701    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:31.012712    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:31.012718    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:31.015543    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:31.015560    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:31.015568    4640 round_trippers.go:580]     Audit-Id: b6586a64-ec07-44ee-8a00-1f3b8a00e0bd
	I0805 16:21:31.015572    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:31.015576    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:31.015580    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:31.015583    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:31.015589    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:31 GMT
	I0805 16:21:31.015682    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:31.512531    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:31.512543    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:31.512550    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:31.512554    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:31.514066    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:31.514076    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:31.514081    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:31 GMT
	I0805 16:21:31.514085    4640 round_trippers.go:580]     Audit-Id: 7d410de7-b0d5-4d4e-8455-d31b0df7d302
	I0805 16:21:31.514089    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:31.514093    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:31.514096    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:31.514107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:31.514758    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:32.014110    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:32.014136    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:32.014147    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:32.014157    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:32.016553    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:32.016570    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:32.016580    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:32.016586    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:32.016592    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:32.016598    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:32.016602    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:32 GMT
	I0805 16:21:32.016605    4640 round_trippers.go:580]     Audit-Id: 67fdb64b-273a-46c2-aac5-c3b115422aa4
	I0805 16:21:32.016861    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:32.017132    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:32.513171    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:32.513188    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:32.513195    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:32.513198    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:32.514908    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:32.514920    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:32.514925    4640 round_trippers.go:580]     Audit-Id: 0f5a2e98-6be6-4963-8897-91c70642048c
	I0805 16:21:32.514928    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:32.514931    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:32.514933    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:32.514936    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:32.514939    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:32 GMT
	I0805 16:21:32.515082    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:33.013769    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:33.013803    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:33.013814    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:33.013822    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:33.016491    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:33.016509    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:33.016519    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:33 GMT
	I0805 16:21:33.016526    4640 round_trippers.go:580]     Audit-Id: 96b5f269-7be9-42a9-9687-cba57d05f76e
	I0805 16:21:33.016532    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:33.016538    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:33.016543    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:33.016548    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:33.016715    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:33.512751    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:33.512772    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:33.512783    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:33.512789    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:33.515431    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:33.515480    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:33.515498    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:33.515506    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:33.515510    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:33 GMT
	I0805 16:21:33.515513    4640 round_trippers.go:580]     Audit-Id: 6cd252a3-d07d-441e-bcf4-bc3bd00c2488
	I0805 16:21:33.515517    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:33.515520    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:33.515747    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.013003    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:34.013032    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:34.013043    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:34.013052    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:34.015447    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:34.015465    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:34.015472    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:34.015476    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:34.015479    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:34.015484    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:34.015487    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:34 GMT
	I0805 16:21:34.015492    4640 round_trippers.go:580]     Audit-Id: efcfb0d1-8345-4db5-bce9-e31085842da3
	I0805 16:21:34.015599    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.513298    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:34.513317    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:34.513376    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:34.513383    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:34.515051    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:34.515065    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:34.515072    4640 round_trippers.go:580]     Audit-Id: 2a42cb6a-0051-47bd-85f4-9f8ca80afa70
	I0805 16:21:34.515078    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:34.515081    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:34.515087    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:34.515099    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:34.515103    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:34 GMT
	I0805 16:21:34.515359    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.515540    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:35.013932    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:35.013957    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:35.013968    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:35.013976    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:35.016505    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:35.016524    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:35.016530    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:35.016537    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:35.016541    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:35.016544    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:35.016555    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:35 GMT
	I0805 16:21:35.016559    4640 round_trippers.go:580]     Audit-Id: 09fa0e04-c026-439e-9cd7-392fd82b16fe
	I0805 16:21:35.016913    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:35.513491    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:35.513514    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:35.513526    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:35.513532    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:35.515995    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:35.516012    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:35.516020    4640 round_trippers.go:580]     Audit-Id: a2b05a8a-9a91-4d20-93d0-b8701ac59b95
	I0805 16:21:35.516024    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:35.516036    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:35.516041    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:35.516055    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:35.516060    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:35 GMT
	I0805 16:21:35.516151    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:36.013521    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.013549    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.013561    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.013566    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.016095    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:36.016112    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.016119    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.016131    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.016136    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.016140    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.016144    4640 round_trippers.go:580]     Audit-Id: 77e04f39-a037-4ea2-9716-ad04139089d1
	I0805 16:21:36.016147    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.016230    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:36.016465    4640 node_ready.go:49] node "multinode-985000" has status "Ready":"True"
	I0805 16:21:36.016481    4640 node_ready.go:38] duration metric: took 15.504115701s for node "multinode-985000" to be "Ready" ...
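The 15.5s wait just logged by node_ready.go is a plain poll: fetch the Node object roughly every 500ms (the cadence visible in the timestamps above) and check its Ready condition. A minimal sketch of the same pattern with client-go — assuming a kubeconfig at the default path, and not minikube's actual implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady re-GETs the Node about every 500ms until its Ready
// condition is True or the context's deadline expires.
func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q never became Ready: %w", name, ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForNodeReady(ctx, cs, "multinode-985000"); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
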
	I0805 16:21:36.016489    4640 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:21:36.016543    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:36.016551    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.016559    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.016563    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.019046    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:36.019057    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.019065    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.019069    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.019078    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.019081    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.019084    4640 round_trippers.go:580]     Audit-Id: 96048303-6e62-4ba8-a291-bc1ad976756e
	I0805 16:21:36.019091    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.019721    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
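The single PodList request above fetches every kube-system pod in one call; the per-pod waits that follow then re-GET each pod individually. A fragment sketching just the list-and-filter step — the helper name is hypothetical, and it assumes a configured *kubernetes.Clientset cs, a context ctx, and the same imports as the previous sketch:

// listSystemCriticalPods returns the names of kube-system pods carrying
// one of the component/k8s-app labels the test waits on (see the label
// list logged by pod_ready.go above).
func listSystemCriticalPods(ctx context.Context, cs *kubernetes.Clientset) ([]string, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	wanted := map[string]bool{
		"kube-dns": true, "etcd": true, "kube-apiserver": true,
		"kube-controller-manager": true, "kube-proxy": true, "kube-scheduler": true,
	}
	var names []string
	for _, p := range pods.Items {
		if wanted[p.Labels["component"]] || wanted[p.Labels["k8s-app"]] {
			names = append(names, p.Name)
		}
	}
	return names, nil
}
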
	I0805 16:21:36.021921    4640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:36.021960    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:36.021964    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.021970    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.021974    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.023179    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.023187    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.023192    4640 round_trippers.go:580]     Audit-Id: ba42f387-f106-4773-86de-3a22085fd86a
	I0805 16:21:36.023195    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.023198    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.023200    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.023204    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.023208    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.023410    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:36.023652    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.023659    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.023665    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.023671    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.024732    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.024744    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.024752    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.024758    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.024765    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.024768    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.024771    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.024775    4640 round_trippers.go:580]     Audit-Id: 2008721c-b230-4e73-b037-d3a843d7c7c8
	I0805 16:21:36.024909    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:36.523495    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:36.523508    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.523514    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.523519    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.525003    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.525014    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.525020    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.525042    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.525049    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.525053    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.525060    4640 round_trippers.go:580]     Audit-Id: 1ad5a8dd-64b3-4881-9a8e-e5eaab368c53
	I0805 16:21:36.525066    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.525202    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:36.525483    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.525490    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.525498    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.525502    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.526801    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.526810    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.526814    4640 round_trippers.go:580]     Audit-Id: 71c4017f-a267-489e-86ed-59098eae3b88
	I0805 16:21:36.526817    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.526834    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.526840    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.526846    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.526850    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.527025    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:37.022759    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:37.022781    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.022791    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.022799    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.025487    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.025503    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.025510    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.025515    4640 round_trippers.go:580]     Audit-Id: 7446d9fd-22ed-4d20-b0f2-e8c4a88b04f4
	I0805 16:21:37.025536    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.025543    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.025547    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.025556    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.025649    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:37.026010    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.026020    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.026028    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.026033    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.027337    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.027346    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.027354    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.027359    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.027363    4640 round_trippers.go:580]     Audit-Id: a309eed4-f088-47f7-8b84-4761b59dbb8c
	I0805 16:21:37.027366    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.027368    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.027371    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.027425    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.522283    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:37.522304    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.522315    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.522322    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.524762    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.524776    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.524782    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.524788    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.524792    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.524795    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.524799    4640 round_trippers.go:580]     Audit-Id: eaef42a8-7b43-4091-9b70-8d31adc979e5
	I0805 16:21:37.524803    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.525073    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0805 16:21:37.525438    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.525480    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.525488    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.525492    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.526890    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.526903    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.526912    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.526918    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.526927    4640 round_trippers.go:580]     Audit-Id: a3a0e71a-c982-4504-9fae-e76101688c05
	I0805 16:21:37.526931    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.526935    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.526937    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.527034    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.527211    4640 pod_ready.go:92] pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.527220    4640 pod_ready.go:81] duration metric: took 1.505289062s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
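Each of these per-pod waits — coredns here, then etcd, kube-apiserver, kube-controller-manager, kube-proxy, and kube-scheduler below — has the same shape: GET the pod, inspect its PodReady condition, retry on an interval until the 6m0s budget runs out. A hedged sketch of that check, with hypothetical helper names and the same client-go setup assumed as in the earlier node-readiness sketch:

// isPodReady reports whether the pod's PodReady condition is True,
// mirroring the `has status "Ready":"True"` lines in this log.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls a named kube-system pod until it is Ready or the
// context (e.g. the 6m0s budget above) expires.
func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %q not Ready: %w", name, ctx.Err())
		case <-time.After(500 * time.Millisecond):
		}
	}
}
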
	I0805 16:21:37.527230    4640 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.527259    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-985000
	I0805 16:21:37.527264    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.527269    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.527277    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.528379    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.528389    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.528394    4640 round_trippers.go:580]     Audit-Id: 3cf4f372-47fb-4b72-9b30-185d93d01537
	I0805 16:21:37.528401    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.528405    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.528408    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.528411    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.528414    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.528618    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-985000","namespace":"kube-system","uid":"8d7ca2d9-8c7b-41b9-a199-de6449107471","resourceVersion":"379","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.mirror":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030299Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0805 16:21:37.528833    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.528840    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.528845    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.528850    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.529802    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.529808    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.529813    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.529816    4640 round_trippers.go:580]     Audit-Id: 314df0bd-894e-4607-bad0-3348c18fe807
	I0805 16:21:37.529820    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.529823    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.529826    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.529833    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.530046    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.530203    4640 pod_ready.go:92] pod "etcd-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.530210    4640 pod_ready.go:81] duration metric: took 2.974841ms for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.530218    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.530249    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-985000
	I0805 16:21:37.530253    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.530259    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.530262    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.531449    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.531456    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.531461    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.531463    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.531467    4640 round_trippers.go:580]     Audit-Id: 1801a8f0-22d5-44e8-942c-ea521b1ffa66
	I0805 16:21:37.531469    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.531475    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.531477    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.531592    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-985000","namespace":"kube-system","uid":"9be3378a-5fab-4907-baad-507918e714e4","resourceVersion":"369","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.mirror":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0805 16:21:37.531810    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.531820    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.531825    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.531830    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.532663    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.532668    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.532672    4640 round_trippers.go:580]     Audit-Id: 6d0fc4ed-c609-4ee7-a57f-b61eed1bc442
	I0805 16:21:37.532675    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.532679    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.532682    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.532684    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.532688    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.532807    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.532958    4640 pod_ready.go:92] pod "kube-apiserver-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.532967    4640 pod_ready.go:81] duration metric: took 2.743443ms for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.532973    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.533000    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-985000
	I0805 16:21:37.533004    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.533009    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.533012    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.533987    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.533995    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.534000    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.534004    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.534020    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.534027    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.534031    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.534034    4640 round_trippers.go:580]     Audit-Id: 97e4dc5c-f4bf-419e-8b15-be800418054c
	I0805 16:21:37.534147    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-985000","namespace":"kube-system","uid":"4ad64361-65de-4b0b-b2a3-07df18c2e603","resourceVersion":"342","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.mirror":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.seen":"2024-08-05T23:21:06.366027130Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0805 16:21:37.534370    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.534377    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.534383    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.534386    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.535293    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.535301    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.535305    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.535308    4640 round_trippers.go:580]     Audit-Id: a4c04a0a-9401-41d1-a0fc-f2a2187abde4
	I0805 16:21:37.535310    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.535313    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.535320    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.535323    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.535432    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.535591    4640 pod_ready.go:92] pod "kube-controller-manager-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.535599    4640 pod_ready.go:81] duration metric: took 2.621545ms for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.535606    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.535629    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwgw7
	I0805 16:21:37.535634    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.535639    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.535643    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.536550    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.536557    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.536565    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.536570    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.536575    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.536578    4640 round_trippers.go:580]     Audit-Id: 5a688e80-7db3-4070-a1a8-c3419ddb4d44
	I0805 16:21:37.536580    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.536582    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.536704    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fwgw7","generateName":"kube-proxy-","namespace":"kube-system","uid":"3fb72e39-699d-4123-ae5e-e314a191d904","resourceVersion":"409","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b6258e6-7b31-4600-b32b-4a269867c123","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b6258e6-7b31-4600-b32b-4a269867c123\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0805 16:21:37.614745    4640 request.go:629] Waited for 77.807971ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.614815    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.614822    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.614839    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.614845    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.616956    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.616984    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.616989    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.616993    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.616996    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.616999    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.617002    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.617005    4640 round_trippers.go:580]     Audit-Id: e297627c-4c52-417b-935c-d406bf086c16
	I0805 16:21:37.617232    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.617428    4640 pod_ready.go:92] pod "kube-proxy-fwgw7" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.617437    4640 pod_ready.go:81] duration metric: took 81.82693ms for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
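The "Waited for ... due to client-side throttling, not priority and fairness" lines just above (and again below) come from client-go's client-side rate limiter: with no explicit settings it allows roughly 5 requests/second with a burst of 10, and the rapid back-to-back pod and node GETs in this phase exhaust that burst, so surplus requests are delayed and the wait is logged. A sketch of where those knobs live on rest.Config — raising them is a trade-off against API-server load, not something this test does:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// With QPS/Burst left at zero, client-go falls back to its defaults
	// (about 5 QPS, burst 10) and delays surplus requests client-side,
	// which is exactly what the "Waited for ..." log lines record.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("clientset configured: %T\n", cs)
}
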
	I0805 16:21:37.617444    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.815296    4640 request.go:629] Waited for 197.761592ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:21:37.815347    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:21:37.815355    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.815366    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.815376    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.817961    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.817976    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.818001    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.818008    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:37.818049    4640 round_trippers.go:580]     Audit-Id: cc44c4e8-8012-4718-aa24-c05fec399a2e
	I0805 16:21:37.818064    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.818078    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.818082    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.818186    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-985000","namespace":"kube-system","uid":"5e23b1b7-e45d-4b43-831c-aa835c5e536d","resourceVersion":"396","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.mirror":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.seen":"2024-08-05T23:21:06.366029633Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0805 16:21:38.014472    4640 request.go:629] Waited for 195.947535ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:38.014569    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:38.014578    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.014589    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.014597    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.017395    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.017406    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.017413    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.017418    4640 round_trippers.go:580]     Audit-Id: 925efcbc-f43b-4431-905e-26927bb76a48
	I0805 16:21:38.017422    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.017428    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.017434    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.017441    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.017905    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:38.018153    4640 pod_ready.go:92] pod "kube-scheduler-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:38.018164    4640 pod_ready.go:81] duration metric: took 400.713995ms for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:38.018173    4640 pod_ready.go:38] duration metric: took 2.001673669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
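
The readiness gate above polls each system-critical pod until its Ready condition reports True. As a minimal sketch of that predicate with client-go (a hypothetical helper, not minikube's actual pod_ready.go; assumes an already-configured clientset):

    // isPodReady reports whether a pod's Ready condition is True, the same
    // check the pod_ready lines above are driving at.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    // checkPod fetches one pod and applies the predicate; the real loop
    // retries this with a deadline (6m0s above) between GETs.
    func checkPod(cs kubernetes.Interface, ns, name string) error {
    	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	if !isPodReady(pod) {
    		return fmt.Errorf("pod %s/%s not ready yet", ns, name)
    	}
    	return nil
    }
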
	I0805 16:21:38.018198    4640 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:21:38.018268    4640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:21:38.030133    4640 command_runner.go:130] > 1977
	I0805 16:21:38.030360    4640 api_server.go:72] duration metric: took 18.07694495s to wait for apiserver process to appear ...
	I0805 16:21:38.030369    4640 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:21:38.030384    4640 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:21:38.034009    4640 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0805 16:21:38.034048    4640 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0805 16:21:38.034052    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.034058    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.034063    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.034646    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:38.034653    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.034658    4640 round_trippers.go:580]     Audit-Id: 9f5c9766-330c-4bb5-a5de-4c3a0fdbe474
	I0805 16:21:38.034662    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.034665    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.034668    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.034670    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.034673    4640 round_trippers.go:580]     Content-Length: 263
	I0805 16:21:38.034676    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.034687    4640 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0805 16:21:38.034733    4640 api_server.go:141] control plane version: v1.30.3
	I0805 16:21:38.034742    4640 api_server.go:131] duration metric: took 4.369143ms to wait for apiserver health ...
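
The health gate above is two plain HTTPS GETs: /healthz must return 200 with body "ok", then /version is parsed for the control-plane version (v1.30.3 here). A rough equivalent with net/http; skipping TLS verification is an assumption made for brevity only, the real client trusts the cluster CA:

    // checkHealthz GETs <base>/healthz and expects a 200 "ok" body,
    // e.g. checkHealthz("https://192.169.0.13:8443").
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func checkHealthz(base string) error {
    	// Sketch only: InsecureSkipVerify stands in for the real CA pool.
    	c := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := c.Get(base + "/healthz")
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
    		return fmt.Errorf("healthz not ok: %d %q", resp.StatusCode, body)
    	}
    	return nil
    }
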
	I0805 16:21:38.034747    4640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 16:21:38.213812    4640 request.go:629] Waited for 178.999213ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.213950    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.213960    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.213970    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.213980    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.217309    4640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:21:38.217324    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.217331    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.217336    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.217363    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.217372    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.217377    4640 round_trippers.go:580]     Audit-Id: 0f21513f-44e7-4d2f-bacd-2a12fceef757
	I0805 16:21:38.217381    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.217979    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0805 16:21:38.219249    4640 system_pods.go:59] 8 kube-system pods found
	I0805 16:21:38.219261    4640 system_pods.go:61] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:21:38.219265    4640 system_pods.go:61] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:21:38.219268    4640 system_pods.go:61] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:21:38.219271    4640 system_pods.go:61] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:21:38.219276    4640 system_pods.go:61] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:21:38.219278    4640 system_pods.go:61] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:21:38.219280    4640 system_pods.go:61] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:21:38.219283    4640 system_pods.go:61] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:21:38.219286    4640 system_pods.go:74] duration metric: took 184.535842ms to wait for pod list to return data ...
	I0805 16:21:38.219291    4640 default_sa.go:34] waiting for default service account to be created ...
	I0805 16:21:38.413643    4640 request.go:629] Waited for 194.308242ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:21:38.413680    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:21:38.413687    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.413695    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.413699    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.415522    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:38.415531    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.415536    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.415539    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.415543    4640 round_trippers.go:580]     Content-Length: 261
	I0805 16:21:38.415546    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.415548    4640 round_trippers.go:580]     Audit-Id: efc85c0c-9bbc-4cb7-8c14-19ba2f873800
	I0805 16:21:38.415551    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.415553    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.415563    4640 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b0626468-f73b-4e9b-8270-658495d43f4a","resourceVersion":"337","creationTimestamp":"2024-08-05T23:21:19Z"}}]}
	I0805 16:21:38.415681    4640 default_sa.go:45] found service account: "default"
	I0805 16:21:38.415690    4640 default_sa.go:55] duration metric: took 196.394719ms for default service account to be created ...
	I0805 16:21:38.415697    4640 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 16:21:38.613742    4640 request.go:629] Waited for 198.012461ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.613858    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.613864    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.613870    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.613874    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.616077    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.616090    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.616099    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.616106    4640 round_trippers.go:580]     Audit-Id: 3f8a6f23-788b-41c4-8dee-6ff59c02c21d
	I0805 16:21:38.616112    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.616116    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.616126    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.616143    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.616489    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0805 16:21:38.617747    4640 system_pods.go:86] 8 kube-system pods found
	I0805 16:21:38.617761    4640 system_pods.go:89] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:21:38.617766    4640 system_pods.go:89] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:21:38.617770    4640 system_pods.go:89] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:21:38.617773    4640 system_pods.go:89] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:21:38.617776    4640 system_pods.go:89] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:21:38.617780    4640 system_pods.go:89] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:21:38.617784    4640 system_pods.go:89] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:21:38.617787    4640 system_pods.go:89] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:21:38.617792    4640 system_pods.go:126] duration metric: took 202.090644ms to wait for k8s-apps to be running ...
	I0805 16:21:38.617801    4640 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 16:21:38.617848    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:21:38.629448    4640 system_svc.go:56] duration metric: took 11.643357ms WaitForService to wait for kubelet
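
The kubelet gate is a single remote command; `systemctl is-active --quiet` exits 0 only when the unit is active, so the runner just maps the exit status to success. A sketch shelling out over ssh (host, user, and key path are placeholders, not values from this run):

    // kubeletActive mirrors the `sudo systemctl is-active --quiet ...` probe
    // from the log: exit status 0 means the kubelet unit is running.
    package main

    import "os/exec"

    func kubeletActive(host, keyPath string) bool {
    	cmd := exec.Command("ssh", "-i", keyPath, "docker@"+host,
    		"sudo", "systemctl", "is-active", "--quiet", "kubelet")
    	return cmd.Run() == nil
    }
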
	I0805 16:21:38.629463    4640 kubeadm.go:582] duration metric: took 18.676048708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:21:38.629475    4640 node_conditions.go:102] verifying NodePressure condition ...
	I0805 16:21:38.814057    4640 request.go:629] Waited for 184.539621ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0805 16:21:38.814182    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0805 16:21:38.814193    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.814205    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.814213    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.817076    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.817092    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.817099    4640 round_trippers.go:580]     Audit-Id: 83bb2c88-8ae3-45b7-a0f6-9d3f9fead5f2
	I0805 16:21:38.817103    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.817112    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.817116    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.817123    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.817128    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:39 GMT
	I0805 16:21:38.817200    4640 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I0805 16:21:38.817474    4640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:21:38.817490    4640 node_conditions.go:123] node cpu capacity is 2
	I0805 16:21:38.817502    4640 node_conditions.go:105] duration metric: took 188.023135ms to run NodePressure ...
	I0805 16:21:38.817512    4640 start.go:241] waiting for startup goroutines ...
	I0805 16:21:38.817520    4640 start.go:246] waiting for cluster config update ...
	I0805 16:21:38.817530    4640 start.go:255] writing updated cluster config ...
	I0805 16:21:38.838343    4640 out.go:177] 
	I0805 16:21:38.859405    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:38.859465    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:38.881260    4640 out.go:177] * Starting "multinode-985000-m02" worker node in "multinode-985000" cluster
	I0805 16:21:38.923226    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:21:38.923254    4640 cache.go:56] Caching tarball of preloaded images
	I0805 16:21:38.923425    4640 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:21:38.923439    4640 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:21:38.923503    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:38.924257    4640 start.go:360] acquireMachinesLock for multinode-985000-m02: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:21:38.924355    4640 start.go:364] duration metric: took 78.775µs to acquireMachinesLock for "multinode-985000-m02"
	I0805 16:21:38.924379    4640 start.go:93] Provisioning new machine with config: &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0805 16:21:38.924443    4640 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0805 16:21:38.946258    4640 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:21:38.946431    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:38.946482    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:38.956315    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52515
	I0805 16:21:38.956651    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:38.957008    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:38.957028    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:38.957245    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:38.957408    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:38.957527    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:38.957642    4640 start.go:159] libmachine.API.Create for "multinode-985000" (driver="hyperkit")
	I0805 16:21:38.957663    4640 client.go:168] LocalClient.Create starting
	I0805 16:21:38.957697    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:21:38.957735    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:21:38.957747    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:21:38.957790    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:21:38.957819    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:21:38.957833    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:21:38.957849    4640 main.go:141] libmachine: Running pre-create checks...
	I0805 16:21:38.957855    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .PreCreateCheck
	I0805 16:21:38.957933    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:38.957959    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:38.967700    4640 main.go:141] libmachine: Creating machine...
	I0805 16:21:38.967725    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .Create
	I0805 16:21:38.967957    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:38.968233    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:38.967940    4677 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:21:38.968338    4640 main.go:141] libmachine: (multinode-985000-m02) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:21:39.171726    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.171650    4677 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa...
	I0805 16:21:39.251408    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.251327    4677 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk...
	I0805 16:21:39.251421    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Writing magic tar header
	I0805 16:21:39.251439    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Writing SSH key tar header
	I0805 16:21:39.252021    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.251983    4677 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02 ...
	I0805 16:21:39.622286    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:39.622309    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid
	I0805 16:21:39.622382    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Using UUID ab5b9c9f-9e28-4bc2-8fcd-b98fce011173
	I0805 16:21:39.647304    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Generated MAC a6:1c:88:9c:44:3
	I0805 16:21:39.647324    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:21:39.647363    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:21:39.647396    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:21:39.647440    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:21:39.647475    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ab5b9c9f-9e28-4bc2-8fcd-b98fce011173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
	I0805 16:21:39.647493    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:21:39.650407    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Pid is 4678
	I0805 16:21:39.650823    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 0
	I0805 16:21:39.650838    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:39.650909    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:39.651807    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:39.651870    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:39.651899    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:39.651984    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:39.652006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:39.652022    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:39.652032    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:39.652039    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:39.652046    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:39.652082    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:39.652100    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:39.652113    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:39.652123    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:39.652143    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
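
Each boot attempt above rescans /var/db/dhcpd_leases for the MAC hyperkit generated (a6:1c:88:9c:44:3, with unpadded octets) until a matching lease appears, which here happens on attempt 5. A simplified parser for that lookup, assuming the `ip_address=` line precedes `hw_address=` inside each lease block as in the entries logged above:

    // ipForMAC scans the macOS dhcpd leases file for a hardware address and
    // returns the lease's IP. Sketch of the driver's poll between attempts.
    package main

    import (
    	"bufio"
    	"os"
    	"strings"
    )

    func ipForMAC(mac string) (string, bool) {
    	f, err := os.Open("/var/db/dhcpd_leases")
    	if err != nil {
    		return "", false
    	}
    	defer f.Close()
    	var ip string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		switch {
    		case strings.HasPrefix(line, "ip_address="):
    			ip = strings.TrimPrefix(line, "ip_address=")
    		case strings.HasPrefix(line, "hw_address="):
    			// values look like "1,a6:1c:88:9c:44:3"; drop the "1," type tag
    			if strings.TrimPrefix(line, "hw_address=1,") == mac {
    				return ip, true
    			}
    		}
    	}
    	return "", false
    }
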
	I0805 16:21:39.657903    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:21:39.666018    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:21:39.666937    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:21:39.666963    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:21:39.666975    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:21:39.666990    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:21:40.050205    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:21:40.050221    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:21:40.165006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:21:40.165028    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:21:40.165042    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:21:40.165049    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:21:40.165899    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:21:40.165911    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:21:41.653048    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 1
	I0805 16:21:41.653066    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:41.653144    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:41.653911    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:41.653968    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:41.653979    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:41.653992    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:41.653998    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:41.654006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:41.654015    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:41.654030    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:41.654045    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:41.654053    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:41.654061    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:41.654070    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:41.654078    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:41.654093    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:43.655366    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 2
	I0805 16:21:43.655382    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:43.655471    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:43.656243    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:43.656291    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:43.656301    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:43.656319    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:43.656329    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:43.656351    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:43.656362    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:43.656369    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:43.656375    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:43.656391    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:43.656406    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:43.656416    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:43.656423    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:43.656437    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:45.657345    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 3
	I0805 16:21:45.657361    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:45.657459    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:45.658214    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:45.658269    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:45.658278    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:45.658286    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:45.658295    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:45.658310    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:45.658321    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:45.658329    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:45.658337    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:45.658349    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:45.658362    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:45.658370    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:45.658378    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:45.658387    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:45.751756    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:21:45.751812    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:21:45.751830    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:21:45.774801    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:21:47.659182    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 4
	I0805 16:21:47.659208    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:47.659291    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:47.660062    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:47.660112    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:47.660128    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:47.660137    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:47.660145    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:47.660153    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:47.660162    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:47.660178    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:47.660192    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:47.660204    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:47.660218    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:47.660230    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:47.660240    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:47.660260    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:49.662115    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 5
	I0805 16:21:49.662148    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:49.662310    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:49.663748    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:49.663812    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0805 16:21:49.663831    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b00c}
	I0805 16:21:49.663846    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found match: a6:1c:88:9c:44:3
	I0805 16:21:49.663856    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | IP: 192.169.0.14
	I0805 16:21:49.663945    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:49.664855    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:49.665006    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:49.665127    4640 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 16:21:49.665139    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:21:49.665271    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:49.665344    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:49.666326    4640 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 16:21:49.666337    4640 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 16:21:49.666342    4640 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 16:21:49.666348    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.666471    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.666603    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.666743    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.666869    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.667045    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.667279    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.667287    4640 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 16:21:49.724369    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
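
WaitForSSH simply runs `exit 0` over the fresh connection until it succeeds; the nil error above means the guest's sshd is reachable. A sketch with golang.org/x/crypto/ssh, using boot2docker-style defaults as assumed credentials (the real code uses the generated id_rsa key) and ignoring the host key for brevity:

    // sshAlive dials addr ("192.169.0.14:22" above) and runs `exit 0`,
    // mirroring the WaitForSSH probe. Credentials here are assumptions.
    package main

    import "golang.org/x/crypto/ssh"

    func sshAlive(addr string) bool {
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.Password("tcuser")},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return false
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return false
    	}
    	defer sess.Close()
    	return sess.Run("exit 0") == nil
    }
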
	I0805 16:21:49.724382    4640 main.go:141] libmachine: Detecting the provisioner...
	I0805 16:21:49.724388    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.724522    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.724626    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.724719    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.724810    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.724938    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.725087    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.725094    4640 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 16:21:49.782403    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 16:21:49.782454    4640 main.go:141] libmachine: found compatible host: buildroot
	I0805 16:21:49.782460    4640 main.go:141] libmachine: Provisioning with buildroot...
	I0805 16:21:49.782466    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.782595    4640 buildroot.go:166] provisioning hostname "multinode-985000-m02"
	I0805 16:21:49.782606    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.782698    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.782797    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.782871    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.782964    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.783079    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.783204    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.783350    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.783359    4640 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000-m02 && echo "multinode-985000-m02" | sudo tee /etc/hostname
	I0805 16:21:49.854175    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000-m02
	
	I0805 16:21:49.854190    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.854319    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.854421    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.854492    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.854587    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.854712    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.854870    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.854882    4640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:21:49.917814    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:21:49.917830    4640 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:21:49.917840    4640 buildroot.go:174] setting up certificates
	I0805 16:21:49.917846    4640 provision.go:84] configureAuth start
	I0805 16:21:49.917856    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.917985    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:49.918095    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.918192    4640 provision.go:143] copyHostCerts
	I0805 16:21:49.918223    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:21:49.918280    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:21:49.918285    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:21:49.918411    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:21:49.918617    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:21:49.918652    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:21:49.918658    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:21:49.918733    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:21:49.918888    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:21:49.918922    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:21:49.918927    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:21:49.918994    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:21:49.919145    4640 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-985000-m02]
	I0805 16:21:50.072896    4640 provision.go:177] copyRemoteCerts
	I0805 16:21:50.072947    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:21:50.072962    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.073107    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.073199    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.073317    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.073426    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:50.108446    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:21:50.108519    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:21:50.128617    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:21:50.128684    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0805 16:21:50.148653    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:21:50.148720    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:21:50.168682    4640 provision.go:87] duration metric: took 250.828344ms to configureAuth
	I0805 16:21:50.168695    4640 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:21:50.168835    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:50.168849    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:50.168993    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.169087    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.169175    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.169262    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.169345    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.169486    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.169621    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.169628    4640 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:21:50.228062    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:21:50.228074    4640 buildroot.go:70] root file system type: tmpfs
	I0805 16:21:50.228150    4640 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:21:50.228164    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.228293    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.228388    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.228480    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.228586    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.228755    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.228888    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.228934    4640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:21:50.296901    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:21:50.296919    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.297064    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.297158    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.297250    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.297333    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.297475    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.297611    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.297624    4640 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:21:51.873922    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:21:51.873940    4640 main.go:141] libmachine: Checking connection to Docker...
	I0805 16:21:51.873964    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetURL
	I0805 16:21:51.874107    4640 main.go:141] libmachine: Docker is up and running!
	I0805 16:21:51.874115    4640 main.go:141] libmachine: Reticulating splines...
	I0805 16:21:51.874120    4640 client.go:171] duration metric: took 12.916447572s to LocalClient.Create
	I0805 16:21:51.874129    4640 start.go:167] duration metric: took 12.916485141s to libmachine.API.Create "multinode-985000"
	I0805 16:21:51.874135    4640 start.go:293] postStartSetup for "multinode-985000-m02" (driver="hyperkit")
	I0805 16:21:51.874142    4640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:21:51.874152    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:51.874292    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:21:51.874313    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:51.874416    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:51.874505    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.874583    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:51.874657    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:51.915394    4640 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:21:51.919538    4640 command_runner.go:130] > NAME=Buildroot
	I0805 16:21:51.919549    4640 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:21:51.919553    4640 command_runner.go:130] > ID=buildroot
	I0805 16:21:51.919557    4640 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:21:51.919560    4640 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:21:51.919635    4640 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:21:51.919645    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:21:51.919746    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:21:51.919897    4640 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:21:51.919903    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:21:51.920070    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:21:51.929531    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:21:51.959146    4640 start.go:296] duration metric: took 85.003807ms for postStartSetup
	I0805 16:21:51.959174    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:51.959830    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:51.959996    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:51.960355    4640 start.go:128] duration metric: took 13.03589336s to createHost
	I0805 16:21:51.960370    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:51.960461    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:51.960532    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.960607    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.960679    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:51.960792    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:51.960921    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:51.960928    4640 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 16:21:52.018527    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722900112.019707412
	
	I0805 16:21:52.018539    4640 fix.go:216] guest clock: 1722900112.019707412
	I0805 16:21:52.018544    4640 fix.go:229] Guest: 2024-08-05 16:21:52.019707412 -0700 PDT Remote: 2024-08-05 16:21:51.960363 -0700 PDT m=+79.692294773 (delta=59.344412ms)
	I0805 16:21:52.018555    4640 fix.go:200] guest clock delta is within tolerance: 59.344412ms
	I0805 16:21:52.018561    4640 start.go:83] releasing machines lock for "multinode-985000-m02", held for 13.094193048s
	I0805 16:21:52.018577    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.018703    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:52.040117    4640 out.go:177] * Found network options:
	I0805 16:21:52.084887    4640 out.go:177]   - NO_PROXY=192.169.0.13
	W0805 16:21:52.106885    4640 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:21:52.106945    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.107811    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.108153    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.108320    4640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:21:52.108371    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	W0805 16:21:52.108412    4640 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:21:52.108519    4640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:21:52.108545    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:52.108628    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:52.108772    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:52.108842    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:52.108951    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:52.109026    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:52.109176    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:52.109197    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:52.109323    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:52.141829    4640 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:21:52.141939    4640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:21:52.141993    4640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:21:52.191903    4640 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:21:52.192466    4640 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:21:52.192507    4640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:21:52.192514    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:21:52.192581    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:21:52.208225    4640 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:21:52.208528    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:21:52.217078    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:21:52.225489    4640 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:21:52.225534    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:21:52.233992    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:21:52.242465    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:21:52.250835    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:21:52.260065    4640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:21:52.268863    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:21:52.277242    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:21:52.285501    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:21:52.293845    4640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:21:52.301185    4640 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:21:52.301319    4640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:21:52.308881    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:52.403323    4640 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:21:52.423722    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:21:52.423794    4640 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:21:52.442557    4640 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:21:52.443108    4640 command_runner.go:130] > [Unit]
	I0805 16:21:52.443119    4640 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:21:52.443124    4640 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:21:52.443128    4640 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:21:52.443132    4640 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:21:52.443136    4640 command_runner.go:130] > StartLimitBurst=3
	I0805 16:21:52.443141    4640 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:21:52.443147    4640 command_runner.go:130] > [Service]
	I0805 16:21:52.443151    4640 command_runner.go:130] > Type=notify
	I0805 16:21:52.443155    4640 command_runner.go:130] > Restart=on-failure
	I0805 16:21:52.443160    4640 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0805 16:21:52.443165    4640 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:21:52.443175    4640 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:21:52.443182    4640 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:21:52.443188    4640 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:21:52.443194    4640 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:21:52.443200    4640 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:21:52.443212    4640 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:21:52.443224    4640 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:21:52.443231    4640 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:21:52.443234    4640 command_runner.go:130] > ExecStart=
	I0805 16:21:52.443246    4640 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:21:52.443250    4640 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:21:52.443256    4640 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:21:52.443262    4640 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:21:52.443265    4640 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:21:52.443269    4640 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:21:52.443272    4640 command_runner.go:130] > LimitCORE=infinity
	I0805 16:21:52.443277    4640 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:21:52.443282    4640 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:21:52.443285    4640 command_runner.go:130] > TasksMax=infinity
	I0805 16:21:52.443290    4640 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:21:52.443296    4640 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:21:52.443299    4640 command_runner.go:130] > Delegate=yes
	I0805 16:21:52.443304    4640 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:21:52.443313    4640 command_runner.go:130] > KillMode=process
	I0805 16:21:52.443317    4640 command_runner.go:130] > [Install]
	I0805 16:21:52.443321    4640 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:21:52.443454    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:21:52.455112    4640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:21:52.472976    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:21:52.485648    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:21:52.496640    4640 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:21:52.520742    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:21:52.532843    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:21:52.547391    4640 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:21:52.547619    4640 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:21:52.550475    4640 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:21:52.550551    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:21:52.558821    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:21:52.572801    4640 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:21:52.669948    4640 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:21:52.772017    4640 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:21:52.772038    4640 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:21:52.785587    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:52.887001    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:22:53.782764    4640 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0805 16:22:53.782779    4640 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0805 16:22:53.782788    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.895755367s)
	I0805 16:22:53.782849    4640 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0805 16:22:53.791796    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:22:53.791808    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578059613Z" level=info msg="Starting up"
	I0805 16:22:53.791820    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578746899Z" level=info msg="containerd not running, starting managed containerd"
	I0805 16:22:53.791833    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.579364099Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	I0805 16:22:53.791843    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.597194743Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0805 16:22:53.791853    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613422882Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0805 16:22:53.791865    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613448264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0805 16:22:53.791875    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613527396Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0805 16:22:53.791884    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613540484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791897    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613598776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791906    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613664323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791924    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613844698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791936    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613881896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791948    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613894727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791957    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613902000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791967    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614005875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791976    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614259691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791991    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615867073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.792000    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615974584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.792024    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616138996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.792033    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616172823Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0805 16:22:53.792042    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616291383Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0805 16:22:53.792050    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616398312Z" level=info msg="metadata content store policy set" policy=shared
	I0805 16:22:53.792059    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.618998610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0805 16:22:53.792068    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619065338Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0805 16:22:53.792076    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619081703Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0805 16:22:53.792085    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619092273Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0805 16:22:53.792094    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619101426Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0805 16:22:53.792103    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619164798Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0805 16:22:53.792113    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619370752Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0805 16:22:53.792121    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619460644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0805 16:22:53.792129    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619495461Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0805 16:22:53.792138    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619506581Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0805 16:22:53.792148    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619515758Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792158    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619524383Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792170    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619532546Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792178    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619541391Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792187    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619550990Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792197    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619565508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792266    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619576616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792278    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619584035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792291    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619598072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792299    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619608190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792307    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619616319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792316    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619625389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792326    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619634123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792335    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619648148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792344    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619658942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792353    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619667668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792362    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619676302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792371    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619686416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792380    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619694011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792388    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619701566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792397    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619709342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792406    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619719250Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0805 16:22:53.792415    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619733203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792423    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619741785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792432    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619749153Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0805 16:22:53.792442    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619797467Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0805 16:22:53.792454    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619811479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0805 16:22:53.792467    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619819137Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0805 16:22:53.792661    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619826861Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0805 16:22:53.792673    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619833500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792682    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619841896Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0805 16:22:53.792690    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619852419Z" level=info msg="NRI interface is disabled by configuration."
	I0805 16:22:53.792702    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620071162Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0805 16:22:53.792710    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620124755Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0805 16:22:53.792718    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620155079Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0805 16:22:53.792725    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620168148Z" level=info msg="containerd successfully booted in 0.023750s"
	I0805 16:22:53.792734    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.639692405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0805 16:22:53.792741    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.644102102Z" level=info msg="Loading containers: start."
	I0805 16:22:53.792763    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.740540264Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0805 16:22:53.792774    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.826229634Z" level=info msg="Loading containers: done."
	I0805 16:22:53.792783    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843276878Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0805 16:22:53.792792    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843375843Z" level=info msg="Daemon has completed initialization"
	I0805 16:22:53.792800    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869275976Z" level=info msg="API listen on /var/run/docker.sock"
	I0805 16:22:53.792807    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869434474Z" level=info msg="API listen on [::]:2376"
	I0805 16:22:53.792813    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	I0805 16:22:53.792821    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.919662359Z" level=info msg="Processing signal 'terminated'"
	I0805 16:22:53.792829    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920773928Z" level=info msg="Daemon shutdown complete"
	I0805 16:22:53.792840    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920792538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0805 16:22:53.792852    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920845272Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0805 16:22:53.792861    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920858866Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0805 16:22:53.792868    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0805 16:22:53.792874    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0805 16:22:53.792904    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0805 16:22:53.792911    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:22:53.792918    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 dockerd[923]: time="2024-08-05T23:21:53.957339969Z" level=info msg="Starting up"
	I0805 16:22:53.792929    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 dockerd[923]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0805 16:22:53.792940    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0805 16:22:53.792946    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0805 16:22:53.792952    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0805 16:22:53.817223    4640 out.go:177] 
	W0805 16:22:53.838182    4640 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 05 23:21:50 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578059613Z" level=info msg="Starting up"
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578746899Z" level=info msg="containerd not running, starting managed containerd"
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.579364099Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.597194743Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613422882Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613448264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613527396Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613540484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613598776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613664323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613844698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613881896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613894727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613902000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614005875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614259691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615867073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615974584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616138996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616172823Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616291383Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616398312Z" level=info msg="metadata content store policy set" policy=shared
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.618998610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619065338Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619081703Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619092273Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619101426Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619164798Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619370752Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619460644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619495461Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619506581Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619515758Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619524383Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619532546Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619541391Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619550990Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619565508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619576616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619584035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619598072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619608190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619616319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619625389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619634123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619648148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619658942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619667668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619676302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619686416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619694011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619701566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619709342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619719250Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619733203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619741785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619749153Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619797467Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619811479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619819137Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619826861Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619833500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619841896Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619852419Z" level=info msg="NRI interface is disabled by configuration."
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620071162Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620124755Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620155079Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620168148Z" level=info msg="containerd successfully booted in 0.023750s"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.639692405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.644102102Z" level=info msg="Loading containers: start."
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.740540264Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.826229634Z" level=info msg="Loading containers: done."
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843276878Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843375843Z" level=info msg="Daemon has completed initialization"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869275976Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869434474Z" level=info msg="API listen on [::]:2376"
	Aug 05 23:21:51 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.919662359Z" level=info msg="Processing signal 'terminated'"
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920773928Z" level=info msg="Daemon shutdown complete"
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920792538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920845272Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920858866Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 05 23:21:52 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:21:53 multinode-985000-m02 dockerd[923]: time="2024-08-05T23:21:53.957339969Z" level=info msg="Starting up"
	Aug 05 23:22:53 multinode-985000-m02 dockerd[923]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0805 16:22:53.838301    4640 out.go:239] * 
	W0805 16:22:53.839537    4640 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:22:53.901092    4640 out.go:177] 
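The failure above is dockerd (pid 923) timing out after 60s while dialing /run/containerd/containerd.sock; note that the earlier, successful start (pid 514) launched its own managed containerd on /var/run/docker/containerd/containerd.sock, which suggests the restarted daemon was waiting on a system containerd socket that never appeared. If the m02 node were still reachable, a plausible way to confirm which side is wedged is sketched below; the `minikube ssh` profile/node flags are assumptions based on the names in this log, and none of these commands were run by the test itself.

	# Hypothetical diagnostics, not part of the test run.
	# Open a shell on the affected node (profile/node names taken from this log).
	minikube ssh -p multinode-985000 -n multinode-985000-m02

	# Inside the VM: does the socket dockerd is dialing actually exist,
	# and what do the relevant units report? (A separate containerd unit
	# may or may not be in play depending on the node image.)
	ls -l /run/containerd/containerd.sock
	sudo systemctl status docker containerd --no-pager

	# Tail the docker unit journal for the dial-timeout error seen above.
	sudo journalctl -u docker --no-pager | tail -n 20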
	
	
	==> Docker <==
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.538240622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.545949341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546006859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546094356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546213245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:21:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2a8cd74365e92f179bb6ee1ce28c9364c192d2bf64c54e8b18c5339cfbdf5dcd/resolv.conf as [nameserver 192.169.0.1]"
	Aug 05 23:21:36 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:21:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/35b9ac42edc06af57c697463456d60a00f8d9d12849ef967af1e639bc238e3b3/resolv.conf as [nameserver 192.169.0.1]"
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.715025205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.715620680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.716022138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.717088853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755323726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755409641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755418837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.764703174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.493861515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.493963422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.494329548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.494770138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:57 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:22:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/abfb33d4f204dd0b2a7ffc533336cce5539144674b64125ac7373b0be8961559/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 05 23:22:58 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:22:58Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841390849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841491056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841532145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841640743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
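The cri-dockerd lines above show how pod DNS is injected on this runtime: each sandbox's resolv.conf is rewritten in place, with the host resolver (192.169.0.1) for the coredns/storage-provisioner sandboxes and the cluster DNS service (10.96.0.10) plus the cluster search domains for the busybox pod. A hedged way to see the result from the pod side, using the pod name from the status table below (illustrative only, not something this test executed):

	# View the DNS config the busybox pod actually received.
	kubectl exec busybox-fc5497c4f-44k5g -- cat /etc/resolv.conf
	# Expected shape per the rewrite logged above: nameserver 10.96.0.10,
	# search default.svc.cluster.local svc.cluster.local cluster.local, options ndots:5.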
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0cbc162071e51       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   abfb33d4f204d       busybox-fc5497c4f-44k5g
	c9365aec33892       cbb01a7bd410d                                                                                         12 minutes ago      Running             coredns                   0                   35b9ac42edc06       coredns-7db6d8ff4d-fqtll
	3d9fd612d0b14       6e38f40d628db                                                                                         12 minutes ago      Running             storage-provisioner       0                   2a8cd74365e92       storage-provisioner
	724e5cfab0a27       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              13 minutes ago      Running             kindnet-cni               0                   65a1122097f07       kindnet-tvtvg
	d58ca48f9f8b2       55bb025d2cfa5                                                                                         13 minutes ago      Running             kube-proxy                0                   c91338eb0e138       kube-proxy-fwgw7
	792feba1a6f6b       3edc18e7b7672                                                                                         13 minutes ago      Running             kube-scheduler            0                   c86e04eb7823b       kube-scheduler-multinode-985000
	1fdd85b796ab3       3861cfcd7c04c                                                                                         13 minutes ago      Running             etcd                      0                   b58900db52990       etcd-multinode-985000
	d11865076c645       76932a3b37d7e                                                                                         13 minutes ago      Running             kube-controller-manager   0                   55a20063845e3       kube-controller-manager-multinode-985000
	608878b33f358       1f6d574d502f3                                                                                         13 minutes ago      Running             kube-apiserver            0                   569788c2699f1       kube-apiserver-multinode-985000
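This table is CRI-level container state for the control-plane node. The same view can be reproduced with crictl over `minikube ssh` (a sketch, not executed by this run; crictl accepts the truncated IDs shown in the first column):

	# Hypothetical: list all CRI containers on the control-plane node.
	minikube ssh -p multinode-985000 "sudo crictl ps -a"
	# Inspect one container by its (truncated) ID.
	minikube ssh -p multinode-985000 "sudo crictl inspect 0cbc162071e51"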
	
	
	==> coredns [c9365aec3389] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57821 - 19682 "HINFO IN 7732396596932693360.4385804994640298901. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014623104s
	[INFO] 10.244.0.3:44234 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136193s
	[INFO] 10.244.0.3:37423 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.058799401s
	[INFO] 10.244.0.3:57961 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.010090318s
	[INFO] 10.244.0.3:37799 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.012765436s
	[INFO] 10.244.0.3:46499 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078364s
	[INFO] 10.244.0.3:42436 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011216992s
	[INFO] 10.244.0.3:35880 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000144767s
	[INFO] 10.244.0.3:39224 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104006s
	[INFO] 10.244.0.3:48536 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013324615s
	[INFO] 10.244.0.3:55841 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000221823s
	[INFO] 10.244.0.3:46712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111417s
	[INFO] 10.244.0.3:51982 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099744s
	[INFO] 10.244.0.3:55425 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080184s
	[INFO] 10.244.0.3:58084 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119904s
	[INFO] 10.244.0.3:57892 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049065s
	[INFO] 10.244.0.3:52329 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049128s
	[INFO] 10.244.0.3:60384 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083319s
	[INFO] 10.244.0.3:51923 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000058598s
	[INFO] 10.244.0.3:37985 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007256s
	[INFO] 10.244.0.3:45792 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000071025s
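The query log above is the DNS pass the multinode test drives from the busybox pod (10.244.0.3): PTR lookups for the service and host IPs, then A/AAAA lookups for kubernetes.default and host.minikube.internal. The NXDOMAIN hits on kubernetes.default.default.svc.cluster.local before the final NOERROR are the ndots search-path walk, not failures. An illustrative reproduction, assuming the pod name from the status table above:

	kubectl exec busybox-fc5497c4f-44k5g -- nslookup kubernetes.default
	kubectl exec busybox-fc5497c4f-44k5g -- nslookup host.minikube.internal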
	
	
	==> describe nodes <==
	Name:               multinode-985000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-985000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=multinode-985000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T16_21_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:21:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-985000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:34:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-985000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 43d0d80c8ac846e58ac4351481e2a76f
	  System UUID:                3ac6443b-0000-0000-898d-9b152fa64288
	  Boot ID:                    382df761-aca3-4a9d-bdce-655bf0444398
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-44k5g                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-fqtll                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-985000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-tvtvg                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-985000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-multinode-985000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-fwgw7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-985000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node multinode-985000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node multinode-985000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node multinode-985000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node multinode-985000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node multinode-985000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node multinode-985000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node multinode-985000 event: Registered Node multinode-985000 in Controller
	  Normal  NodeReady                12m                kubelet          Node multinode-985000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.261909] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.788416] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
	[  +0.099076] systemd-fstab-generator[502]: Ignoring "noauto" option for root device
	[  +1.730104] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +0.293514] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.050985] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.056812] systemd-fstab-generator[892]: Ignoring "noauto" option for root device
	[  +0.126132] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[  +2.458612] systemd-fstab-generator[1120]: Ignoring "noauto" option for root device
	[  +0.104830] systemd-fstab-generator[1132]: Ignoring "noauto" option for root device
	[  +0.110549] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +0.128910] systemd-fstab-generator[1159]: Ignoring "noauto" option for root device
	[  +3.841948] systemd-fstab-generator[1259]: Ignoring "noauto" option for root device
	[  +0.049995] kauditd_printk_skb: 180 callbacks suppressed
	[  +2.575866] systemd-fstab-generator[1508]: Ignoring "noauto" option for root device
	[  +3.513702] systemd-fstab-generator[1689]: Ignoring "noauto" option for root device
	[  +0.052965] kauditd_printk_skb: 70 callbacks suppressed
	[Aug 5 23:21] systemd-fstab-generator[2095]: Ignoring "noauto" option for root device
	[  +0.093506] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.997559] systemd-fstab-generator[2287]: Ignoring "noauto" option for root device
	[  +0.103967] kauditd_printk_skb: 12 callbacks suppressed
	[ +16.210215] kauditd_printk_skb: 60 callbacks suppressed
	[Aug 5 23:22] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [1fdd85b796ab] <==
	{"level":"info","ts":"2024-08-05T23:21:02.190598Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:21:02.190621Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:21:02.179152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 switched to configuration voters=(16152458731666035825)"}
	{"level":"info","ts":"2024-08-05T23:21:02.190761Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2024-08-05T23:21:02.845352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.84543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.845462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.845512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.849595Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.851787Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-985000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T23:21:02.852037Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:21:02.855611Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-08-05T23:21:02.856003Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.856059Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.85615Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.863221Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:21:02.86336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T23:21:02.863406Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T23:21:02.864495Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T23:31:02.914901Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":684}
	{"level":"info","ts":"2024-08-05T23:31:02.918154Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":684,"took":"2.558785ms","hash":2682644219,"current-db-size-bytes":2088960,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2088960,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-08-05T23:31:02.918199Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2682644219,"revision":684,"compact-revision":-1}
	
	
	==> kernel <==
	 23:34:28 up 13 min,  0 users,  load average: 0.22, 0.12, 0.09
	Linux multinode-985000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [724e5cfab0a2] <==
	I0805 23:32:24.989734       1 main.go:299] handling current node
	I0805 23:32:34.989491       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:32:34.989526       1 main.go:299] handling current node
	I0805 23:32:44.994445       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:32:44.994498       1 main.go:299] handling current node
	I0805 23:32:54.996022       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:32:54.996137       1 main.go:299] handling current node
	I0805 23:33:04.994884       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:33:04.994996       1 main.go:299] handling current node
	I0805 23:33:14.989401       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:33:14.989421       1 main.go:299] handling current node
	I0805 23:33:24.989307       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:33:24.989722       1 main.go:299] handling current node
	I0805 23:33:34.988932       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:33:34.989066       1 main.go:299] handling current node
	I0805 23:33:44.994912       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:33:44.995362       1 main.go:299] handling current node
	I0805 23:33:54.988562       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:33:54.988724       1 main.go:299] handling current node
	I0805 23:34:04.990678       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:04.991047       1 main.go:299] handling current node
	I0805 23:34:14.989462       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:14.989592       1 main.go:299] handling current node
	I0805 23:34:24.989135       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:24.989269       1 main.go:299] handling current node
	
	
	==> kube-apiserver [608878b33f35] <==
	I0805 23:21:04.097032       1 aggregator.go:165] initial CRD sync complete...
	I0805 23:21:04.097038       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 23:21:04.097041       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 23:21:04.097046       1 cache.go:39] Caches are synced for autoregister controller
	I0805 23:21:04.110976       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 23:21:04.964782       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0805 23:21:04.969492       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0805 23:21:04.969592       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0805 23:21:05.293407       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 23:21:05.318630       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 23:21:05.372930       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0805 23:21:05.377089       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I0805 23:21:05.377814       1 controller.go:615] quota admission added evaluator for: endpoints
	I0805 23:21:05.381896       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 23:21:06.014220       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0805 23:21:06.529594       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 23:21:06.534785       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0805 23:21:06.541889       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 23:21:20.069451       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0805 23:21:20.168118       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0805 23:34:22.712021       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52583: use of closed network connection
	E0805 23:34:23.040370       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52588: use of closed network connection
	E0805 23:34:23.352264       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52593: use of closed network connection
	E0805 23:34:26.444399       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52624: use of closed network connection
	E0805 23:34:26.631411       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52626: use of closed network connection
	
	
	==> kube-controller-manager [d11865076c64] <==
	I0805 23:21:19.437276       1 shared_informer.go:320] Caches are synced for HPA
	I0805 23:21:19.471485       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 23:21:19.493007       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 23:21:19.891021       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 23:21:19.917468       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 23:21:19.917792       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0805 23:21:20.414332       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="341.696199ms"
	I0805 23:21:20.435171       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="20.789887ms"
	I0805 23:21:20.453666       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.448745ms"
	I0805 23:21:20.454853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="1.144243ms"
	I0805 23:21:20.787054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.481389ms"
	I0805 23:21:20.817469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.368774ms"
	I0805 23:21:20.817550       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.975µs"
	I0805 23:21:35.878200       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.077µs"
	I0805 23:21:35.888778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.967µs"
	I0805 23:21:37.680305       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.353µs"
	I0805 23:21:37.699191       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="7.51419ms"
	I0805 23:21:37.699276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.856µs"
	I0805 23:21:39.419986       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0805 23:22:57.139604       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.652844ms"
	I0805 23:22:57.152479       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.645403ms"
	I0805 23:22:57.161837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.312944ms"
	I0805 23:22:57.161913       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.986µs"
	I0805 23:22:59.131878       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.268042ms"
	I0805 23:22:59.132399       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.529µs"
	
	
	==> kube-proxy [d58ca48f9f8b] <==
	I0805 23:21:21.029929       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:21:21.072929       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0805 23:21:21.105532       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:21:21.105552       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:21:21.105563       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:21:21.107493       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:21:21.107594       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:21:21.107602       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:21:21.108477       1 config.go:192] "Starting service config controller"
	I0805 23:21:21.108482       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:21:21.108492       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:21:21.108494       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:21:21.108784       1 config.go:319] "Starting node config controller"
	I0805 23:21:21.108789       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:21:21.209420       1 shared_informer.go:320] Caches are synced for node config
	I0805 23:21:21.209474       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:21:21.209501       1 shared_informer.go:320] Caches are synced for endpoint slice config
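kube-proxy above settles on the iptables proxier in IPv4 single-stack mode and sets route_localnet=1 so NodePorts answer on localhost. Two hedged spot checks on the node that would confirm those reported settings (assumed commands, not part of this run):

	minikube ssh -p multinode-985000 "cat /proc/sys/net/ipv4/conf/all/route_localnet"
	minikube ssh -p multinode-985000 "sudo iptables -t nat -L KUBE-SERVICES | head -n 5"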
	
	
	==> kube-scheduler [792feba1a6f6] <==
	E0805 23:21:04.024310       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:04.024229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 23:21:04.024017       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:21:04.024329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:21:04.024047       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:21:04.024362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:21:04.024118       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:21:04.024431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0805 23:21:04.860871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:04.861069       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:04.959895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 23:21:04.959949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 23:21:04.962444       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:21:04.962496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:21:04.968410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:21:04.968452       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:21:05.030527       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 23:21:05.030566       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 23:21:05.076451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:05.076659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:05.118159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:05.118676       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:05.141945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:21:05.142020       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0805 23:21:08.218627       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
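	The "forbidden" list/watch errors above are the scheduler racing the apiserver's own startup: they typically repeat until the bootstrap RBAC roles and the extension-apiserver-authentication configmap are in place, then stop (note the final "Caches are synced" line). To ask the same permission question directly, client-go exposes SelfSubjectAccessReview, the API behind "kubectl auth can-i". A sketch, assuming a valid kubeconfig for the identity under test:

package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Group:    "storage.k8s.io",
				Resource: "csinodes", // the resource the log says was forbidden
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().
		Create(context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}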
	
	
	==> kubelet <==
	Aug 05 23:30:06 multinode-985000 kubelet[2102]: E0805 23:30:06.388840    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:30:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:30:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:30:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:30:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:31:06 multinode-985000 kubelet[2102]: E0805 23:31:06.388949    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:31:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:31:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:31:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:31:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:32:06 multinode-985000 kubelet[2102]: E0805 23:32:06.388091    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:33:06 multinode-985000 kubelet[2102]: E0805 23:33:06.388876    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:34:06 multinode-985000 kubelet[2102]: E0805 23:34:06.388016    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
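	The block above repeats once a minute: the kubelet probes iptables health by (re)creating a KUBE-KUBELET-CANARY chain, and the IPv6 half keeps failing because the guest kernel has no ip6tables "nat" table. A sketch of the same probe with os/exec (not the kubelet's actual code, which lives in k8s.io/kubernetes/pkg/util/iptables; exit status 3 is what iptables returns when the table itself is unavailable):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -t nat -L lists the nat table; on this minikube guest it fails for
	// IPv6 because the kernel lacks the ip6tables nat support.
	out, err := exec.Command("ip6tables", "-t", "nat", "-L").CombinedOutput()
	if err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Printf("ip6tables nat unavailable (exit %d): %s", ee.ExitCode(), out)
			return
		}
		panic(err)
	}
	fmt.Println("ip6tables nat table is present")
}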
	
	
	==> storage-provisioner [3d9fd612d0b1] <==
	I0805 23:21:36.824264       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0805 23:21:36.839328       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0805 23:21:36.841986       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0805 23:21:36.851899       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0805 23:21:36.852326       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-985000_20a8683f-3aa0-4f0f-a016-73ecb7148b29!
	I0805 23:21:36.851925       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cf31f72-12b6-4b0c-b90e-6ea19cb3d50f", APIVersion:"v1", ResourceVersion:"436", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-985000_20a8683f-3aa0-4f0f-a016-73ecb7148b29 became leader
	I0805 23:21:36.952695       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-985000_20a8683f-3aa0-4f0f-a016-73ecb7148b29!
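	The provisioner serializes itself through leader election on kube-system/k8s.io-minikube-hostpath; the event above shows it using an Endpoints-based lock. A minimal sketch of the modern client-go equivalent using a Lease lock (the names mirror the log, but this is a sketch, not the provisioner's own code):

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname() // lock holder identity, as in the log's "<name>_<uuid>"

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease; starting controller") },
			OnStoppedLeading: func() { log.Println("lost lease; stopping") },
		},
	})
}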
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-985000 -n multinode-985000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-985000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-ptd5b
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-985000 describe pod busybox-fc5497c4f-ptd5b
helpers_test.go:282: (dbg) kubectl --context multinode-985000 describe pod busybox-fc5497c4f-ptd5b:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-ptd5b
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x2xz9 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-x2xz9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  81s (x3 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
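The FailedScheduling event in the describe output is required pod anti-affinity doing its job: the busybox deployment wants its two replicas on different hosts, and with only one schedulable node the second replica stays Pending. A sketch of the kind of term the test manifest (multinode-pod-dns-test.yaml) presumably carries, with the label selector and topology key inferred from the pod's app=busybox label, so treat both as assumptions:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	antiAffinity := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "busybox"},
				},
				// At most one matching pod per distinct value of this node
				// label, i.e. one replica per host.
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
	fmt.Printf("%+v\n", antiAffinity)
}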
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.20s)

                                                
                                    
TestMultiNode/serial/AddNode (47.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-985000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-985000 -v 3 --alsologtostderr: (44.769902217s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr
multinode_test.go:127: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr: exit status 2 (320.531805ms)

                                                
                                                
-- stdout --
	multinode-985000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-985000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-985000-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:35:13.994691    5280 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:35:13.995368    5280 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:35:13.995376    5280 out.go:304] Setting ErrFile to fd 2...
	I0805 16:35:13.995382    5280 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:35:13.995901    5280 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:35:13.996111    5280 out.go:298] Setting JSON to false
	I0805 16:35:13.996134    5280 mustload.go:65] Loading cluster: multinode-985000
	I0805 16:35:13.996168    5280 notify.go:220] Checking for updates...
	I0805 16:35:13.996448    5280 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:35:13.996464    5280 status.go:255] checking status of multinode-985000 ...
	I0805 16:35:13.996826    5280 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:13.996875    5280 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:14.005869    5280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52693
	I0805 16:35:14.006208    5280 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:14.006597    5280 main.go:141] libmachine: Using API Version  1
	I0805 16:35:14.006606    5280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:14.006864    5280 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:14.006970    5280 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:35:14.007071    5280 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:35:14.007150    5280 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:35:14.008089    5280 status.go:330] multinode-985000 host status = "Running" (err=<nil>)
	I0805 16:35:14.008111    5280 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:35:14.008355    5280 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:14.008375    5280 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:14.016646    5280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52695
	I0805 16:35:14.016959    5280 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:14.017314    5280 main.go:141] libmachine: Using API Version  1
	I0805 16:35:14.017333    5280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:14.017526    5280 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:14.017622    5280 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:35:14.017706    5280 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:35:14.017958    5280 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:14.017998    5280 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:14.028894    5280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52697
	I0805 16:35:14.029232    5280 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:14.029551    5280 main.go:141] libmachine: Using API Version  1
	I0805 16:35:14.029559    5280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:14.029757    5280 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:14.029861    5280 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:35:14.029996    5280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:35:14.030018    5280 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:35:14.030108    5280 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:35:14.030184    5280 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:35:14.030260    5280 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:35:14.030344    5280 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:35:14.060386    5280 ssh_runner.go:195] Run: systemctl --version
	I0805 16:35:14.064762    5280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:35:14.084073    5280 kubeconfig.go:125] found "multinode-985000" server: "https://192.169.0.13:8443"
	I0805 16:35:14.084100    5280 api_server.go:166] Checking apiserver status ...
	I0805 16:35:14.084143    5280 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:35:14.099408    5280 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup
	W0805 16:35:14.106894    5280 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:35:14.106939    5280 ssh_runner.go:195] Run: ls
	I0805 16:35:14.110233    5280 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:35:14.113250    5280 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0805 16:35:14.113261    5280 status.go:422] multinode-985000 apiserver status = Running (err=<nil>)
	I0805 16:35:14.113272    5280 status.go:257] multinode-985000 status: &{Name:multinode-985000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:35:14.113283    5280 status.go:255] checking status of multinode-985000-m02 ...
	I0805 16:35:14.113537    5280 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:14.113559    5280 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:14.122426    5280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52701
	I0805 16:35:14.122761    5280 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:14.123097    5280 main.go:141] libmachine: Using API Version  1
	I0805 16:35:14.123115    5280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:14.123321    5280 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:14.123430    5280 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:35:14.123511    5280 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:35:14.123595    5280 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:35:14.124550    5280 status.go:330] multinode-985000-m02 host status = "Running" (err=<nil>)
	I0805 16:35:14.124560    5280 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:35:14.124813    5280 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:14.124838    5280 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:14.133559    5280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52703
	I0805 16:35:14.133884    5280 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:14.134197    5280 main.go:141] libmachine: Using API Version  1
	I0805 16:35:14.134208    5280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:14.134418    5280 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:14.134534    5280 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:35:14.134618    5280 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:35:14.134863    5280 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:14.134896    5280 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:14.143274    5280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52705
	I0805 16:35:14.143612    5280 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:14.143920    5280 main.go:141] libmachine: Using API Version  1
	I0805 16:35:14.143928    5280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:14.144130    5280 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:14.144235    5280 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:35:14.144371    5280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:35:14.144383    5280 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:35:14.144461    5280 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:35:14.144570    5280 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:35:14.144661    5280 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:35:14.144740    5280 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:35:14.178749    5280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:35:14.190247    5280 status.go:257] multinode-985000-m02 status: &{Name:multinode-985000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:35:14.190263    5280 status.go:255] checking status of multinode-985000-m03 ...
	I0805 16:35:14.190564    5280 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:14.190585    5280 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:14.199237    5280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52708
	I0805 16:35:14.199568    5280 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:14.199913    5280 main.go:141] libmachine: Using API Version  1
	I0805 16:35:14.199933    5280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:14.200118    5280 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:14.200207    5280 main.go:141] libmachine: (multinode-985000-m03) Calling .GetState
	I0805 16:35:14.200295    5280 main.go:141] libmachine: (multinode-985000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:35:14.200380    5280 main.go:141] libmachine: (multinode-985000-m03) DBG | hyperkit pid from json: 5266
	I0805 16:35:14.201337    5280 status.go:330] multinode-985000-m03 host status = "Running" (err=<nil>)
	I0805 16:35:14.201345    5280 host.go:66] Checking if "multinode-985000-m03" exists ...
	I0805 16:35:14.201616    5280 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:14.201647    5280 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:14.210029    5280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52710
	I0805 16:35:14.210354    5280 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:14.210691    5280 main.go:141] libmachine: Using API Version  1
	I0805 16:35:14.210709    5280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:14.210926    5280 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:14.211034    5280 main.go:141] libmachine: (multinode-985000-m03) Calling .GetIP
	I0805 16:35:14.211125    5280 host.go:66] Checking if "multinode-985000-m03" exists ...
	I0805 16:35:14.211382    5280 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:14.211424    5280 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:14.219729    5280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52712
	I0805 16:35:14.220049    5280 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:14.220393    5280 main.go:141] libmachine: Using API Version  1
	I0805 16:35:14.220412    5280 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:14.220607    5280 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:14.220705    5280 main.go:141] libmachine: (multinode-985000-m03) Calling .DriverName
	I0805 16:35:14.220822    5280 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:35:14.220833    5280 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHHostname
	I0805 16:35:14.220918    5280 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHPort
	I0805 16:35:14.221007    5280 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHKeyPath
	I0805 16:35:14.221099    5280 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHUsername
	I0805 16:35:14.221180    5280 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m03/id_rsa Username:docker}
	I0805 16:35:14.249154    5280 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:35:14.260216    5280 status.go:257] multinode-985000-m03 status: &{Name:multinode-985000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:129: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr" : exit status 2
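The "kubelet: Stopped" verdict for m02 comes from the "sudo systemctl is-active --quiet service kubelet" run visible in the stderr trace above: minikube maps that command's exit status straight to Running/Stopped. A local sketch of the same check (the real code routes it through minikube's ssh_runner to each node):

package main

import (
	"fmt"
	"os/exec"
)

func kubeletState() string {
	// systemctl is-active --quiet exits 0 iff the unit is active.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		return "Stopped"
	}
	return "Running"
}

func main() {
	fmt.Println("kubelet:", kubeletState())
}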
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:244: <<< TestMultiNode/serial/AddNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/AddNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-985000 logs -n 25: (1.950257548s)
helpers_test.go:252: TestMultiNode/serial/AddNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| start   | -p multinode-985000                               | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:20 PDT |                     |
	|         | --wait=true --memory=2200                         |                  |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                  |         |         |                     |                     |
	|         | --alsologtostderr                                 |                  |         |         |                     |                     |
	|         | --driver=hyperkit                                 |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- apply -f                   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:22 PDT | 05 Aug 24 16:22 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- rollout                    | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:22 PDT |                     |
	|         | status deployment/busybox                         |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:32 PDT | 05 Aug 24 16:32 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g -- nslookup               |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b -- nslookup               |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g                           |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g -- sh                     |                  |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1                          |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b                           |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |         |         |                     |                     |
	| node    | add -p multinode-985000 -v 3                      | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:35 PDT |
	|         | --alsologtostderr                                 |                  |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 16:20:32
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
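	The format string above is klog's standard prefix. A small parser for it, handy when slicing logs like the ones that follow (the regexp tracks the documented fields; a sketch, not part of the test suite):

package main

import (
	"fmt"
	"regexp"
)

// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogRe = regexp.MustCompile(`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := "I0805 16:20:32.303800    4640 out.go:291] Setting OutFile to fd 1 ..."
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("level=%s month=%s day=%s time=%s tid=%s file=%s line=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7], m[8])
}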
	I0805 16:20:32.303800    4640 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:20:32.303980    4640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:20:32.303986    4640 out.go:304] Setting ErrFile to fd 2...
	I0805 16:20:32.303990    4640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:20:32.304163    4640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:20:32.305609    4640 out.go:298] Setting JSON to false
	I0805 16:20:32.329307    4640 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3003,"bootTime":1722897029,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:20:32.329400    4640 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:20:32.351877    4640 out.go:177] * [multinode-985000] minikube v1.33.1 on Darwin 14.5
	I0805 16:20:32.392940    4640 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:20:32.393020    4640 notify.go:220] Checking for updates...
	I0805 16:20:32.435775    4640 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:20:32.456783    4640 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:20:32.477872    4640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:20:32.499010    4640 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:20:32.519936    4640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:20:32.541363    4640 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:20:32.571784    4640 out.go:177] * Using the hyperkit driver based on user configuration
	I0805 16:20:32.613992    4640 start.go:297] selected driver: hyperkit
	I0805 16:20:32.614020    4640 start.go:901] validating driver "hyperkit" against <nil>
	I0805 16:20:32.614042    4640 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:20:32.618322    4640 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:20:32.618456    4640 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 16:20:32.627075    4640 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 16:20:32.631391    4640 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:20:32.631417    4640 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 16:20:32.631452    4640 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:20:32.631678    4640 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:20:32.631709    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:20:32.631719    4640 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0805 16:20:32.631730    4640 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 16:20:32.631823    4640 start.go:340] cluster config:
	{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:20:32.631925    4640 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:20:32.673756    4640 out.go:177] * Starting "multinode-985000" primary control-plane node in "multinode-985000" cluster
	I0805 16:20:32.695001    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:20:32.695088    4640 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 16:20:32.695107    4640 cache.go:56] Caching tarball of preloaded images
	I0805 16:20:32.695319    4640 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:20:32.695338    4640 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:20:32.695809    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:20:32.695848    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json: {Name:mk470c2e849a0c86ee251e86e74d9f6dfdb47dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:32.696485    4640 start.go:360] acquireMachinesLock for multinode-985000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:20:32.696593    4640 start.go:364] duration metric: took 88.666µs to acquireMachinesLock for "multinode-985000"
	I0805 16:20:32.696646    4640 start.go:93] Provisioning new machine with config: &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:20:32.696745    4640 start.go:125] createHost starting for "" (driver="hyperkit")
	I0805 16:20:32.718059    4640 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:20:32.718351    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:20:32.718416    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:20:32.728195    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52477
	I0805 16:20:32.728547    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:20:32.728938    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:20:32.728948    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:20:32.729147    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:20:32.729251    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:32.729369    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:32.729498    4640 start.go:159] libmachine.API.Create for "multinode-985000" (driver="hyperkit")
	I0805 16:20:32.729521    4640 client.go:168] LocalClient.Create starting
	I0805 16:20:32.729556    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:20:32.729608    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:20:32.729625    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:20:32.729685    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:20:32.729724    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:20:32.729737    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:20:32.729749    4640 main.go:141] libmachine: Running pre-create checks...
	I0805 16:20:32.729760    4640 main.go:141] libmachine: (multinode-985000) Calling .PreCreateCheck
	I0805 16:20:32.729840    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:32.729974    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:32.739224    4640 main.go:141] libmachine: Creating machine...
	I0805 16:20:32.739247    4640 main.go:141] libmachine: (multinode-985000) Calling .Create
	I0805 16:20:32.739475    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:32.739754    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.739457    4648 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:20:32.739852    4640 main.go:141] libmachine: (multinode-985000) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:20:32.920622    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.920524    4648 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa...
	I0805 16:20:32.957084    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.957005    4648 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk...
	I0805 16:20:32.957123    4640 main.go:141] libmachine: (multinode-985000) DBG | Writing magic tar header
	I0805 16:20:32.957134    4640 main.go:141] libmachine: (multinode-985000) DBG | Writing SSH key tar header
	I0805 16:20:32.957531    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.957490    4648 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000 ...
	I0805 16:20:33.331110    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:33.331140    4640 main.go:141] libmachine: (multinode-985000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid
	I0805 16:20:33.331159    4640 main.go:141] libmachine: (multinode-985000) DBG | Using UUID 3ac698fc-f622-443b-898d-9b152fa64288
	I0805 16:20:33.442582    4640 main.go:141] libmachine: (multinode-985000) DBG | Generated MAC e2:6:14:d2:13:ae
	I0805 16:20:33.442603    4640 main.go:141] libmachine: (multinode-985000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:20:33.442636    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I0805 16:20:33.442669    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I0805 16:20:33.442719    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3ac698fc-f622-443b-898d-9b152fa64288", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/1937
3-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:20:33.442758    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3ac698fc-f622-443b-898d-9b152fa64288 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
	I0805 16:20:33.442774    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:20:33.445733    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Pid is 4651
	I0805 16:20:33.446145    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 0
	I0805 16:20:33.446167    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:33.446227    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:33.447073    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:33.447135    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:33.447152    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:33.447186    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:33.447202    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:33.447214    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:33.447222    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:33.447229    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:33.447247    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:33.447269    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:33.447287    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:33.447304    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:33.447321    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:33.453446    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:20:33.506623    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:20:33.507268    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:20:33.507283    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:20:33.507290    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:20:33.507298    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:20:33.891346    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:20:33.891387    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:20:34.006163    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:20:34.006177    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:20:34.006189    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:20:34.006208    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:20:34.007050    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:20:34.007082    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:20:35.448624    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 1
	I0805 16:20:35.448640    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:35.448724    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:35.449516    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:35.449591    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:35.449607    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:35.449619    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:35.449625    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:35.449648    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:35.449664    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:35.449695    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:35.449711    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:35.449719    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:35.449725    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:35.449731    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:35.449738    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:37.449834    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 2
	I0805 16:20:37.449851    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:37.449867    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:37.450676    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:37.450690    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:37.450697    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:37.450707    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:37.450722    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:37.450733    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:37.450744    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:37.450754    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:37.450771    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:37.450784    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:37.450797    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:37.450809    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:37.450819    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:39.451161    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 3
	I0805 16:20:39.451179    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:39.451277    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:39.452025    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:39.452066    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:39.452089    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:39.452104    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:39.452124    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:39.452141    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:39.452154    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:39.452161    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:39.452167    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:39.452183    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:39.452195    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:39.452202    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:39.452211    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:39.592041    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:20:39.592070    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:20:39.592076    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:20:39.615760    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:20:41.452210    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 4
	I0805 16:20:41.452225    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:41.452325    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:41.453101    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:41.453153    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:41.453162    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:41.453169    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:41.453178    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:41.453187    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:41.453194    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:41.453200    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:41.453219    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:41.453231    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:41.453241    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:41.453250    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:41.453258    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:43.455148    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 5
	I0805 16:20:43.455166    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:43.455244    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:43.456059    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:43.456103    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:20:43.456115    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:20:43.456122    4640 main.go:141] libmachine: (multinode-985000) DBG | Found match: e2:6:14:d2:13:ae
	I0805 16:20:43.456127    4640 main.go:141] libmachine: (multinode-985000) DBG | IP: 192.169.0.13
	I0805 16:20:43.456181    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:43.456781    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:43.456879    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:43.456972    4640 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 16:20:43.456985    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:20:43.457082    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:43.457144    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:43.457907    4640 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 16:20:43.457917    4640 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 16:20:43.457923    4640 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 16:20:43.457927    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:43.458023    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:43.458126    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:43.458255    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:43.458346    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:43.458472    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:43.458676    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:43.458683    4640 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 16:20:44.513424    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:20:44.513443    4640 main.go:141] libmachine: Detecting the provisioner...
	I0805 16:20:44.513452    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.513594    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.513694    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.513791    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.513876    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.513996    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.514158    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.514165    4640 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 16:20:44.573082    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 16:20:44.573142    4640 main.go:141] libmachine: found compatible host: buildroot
	I0805 16:20:44.573149    4640 main.go:141] libmachine: Provisioning with buildroot...
	I0805 16:20:44.573155    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.573299    4640 buildroot.go:166] provisioning hostname "multinode-985000"
	I0805 16:20:44.573311    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.573416    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.573499    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.573585    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.573680    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.573795    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.573922    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.574068    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.574076    4640 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000 && echo "multinode-985000" | sudo tee /etc/hostname
	I0805 16:20:44.637872    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000
	
	I0805 16:20:44.637892    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.638029    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.638132    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.638218    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.638297    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.638429    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.638562    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.638582    4640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:20:44.698340    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:20:44.698360    4640 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:20:44.698377    4640 buildroot.go:174] setting up certificates
	I0805 16:20:44.698389    4640 provision.go:84] configureAuth start
	I0805 16:20:44.698397    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.698544    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:44.698658    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.698750    4640 provision.go:143] copyHostCerts
	I0805 16:20:44.698781    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:20:44.698850    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:20:44.698858    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:20:44.699001    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:20:44.699205    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:20:44.699246    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:20:44.699250    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:20:44.699341    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:20:44.699482    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:20:44.699528    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:20:44.699533    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:20:44.699615    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:20:44.699756    4640 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-985000]
	I0805 16:20:45.028860    4640 provision.go:177] copyRemoteCerts
	I0805 16:20:45.028920    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:20:45.028938    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.029080    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.029180    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.029338    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.029452    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:45.063652    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:20:45.063724    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:20:45.083743    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:20:45.083800    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0805 16:20:45.103791    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:20:45.103863    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:20:45.123716    4640 provision.go:87] duration metric: took 425.312704ms to configureAuth
	I0805 16:20:45.123731    4640 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:20:45.123881    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:20:45.123894    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:45.124028    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.124115    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.124206    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.124285    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.124381    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.124503    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.124632    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.124639    4640 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:20:45.176256    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:20:45.176269    4640 buildroot.go:70] root file system type: tmpfs
	I0805 16:20:45.176337    4640 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:20:45.176350    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.176482    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.176580    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.176695    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.176782    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.176911    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.177045    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.177090    4640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:20:45.240992    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:20:45.241023    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.241166    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.241270    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.241382    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.241469    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.241590    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.241743    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.241755    4640 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:20:46.765402    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:20:46.765418    4640 main.go:141] libmachine: Checking connection to Docker...
	I0805 16:20:46.765424    4640 main.go:141] libmachine: (multinode-985000) Calling .GetURL
	I0805 16:20:46.765563    4640 main.go:141] libmachine: Docker is up and running!
	I0805 16:20:46.765570    4640 main.go:141] libmachine: Reticulating splines...
	I0805 16:20:46.765575    4640 client.go:171] duration metric: took 14.036043683s to LocalClient.Create
	I0805 16:20:46.765592    4640 start.go:167] duration metric: took 14.036090848s to libmachine.API.Create "multinode-985000"
	I0805 16:20:46.765602    4640 start.go:293] postStartSetup for "multinode-985000" (driver="hyperkit")
	I0805 16:20:46.765609    4640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:20:46.765620    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.765765    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:20:46.765778    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.765878    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.765972    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.766070    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.766168    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.808597    4640 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:20:46.814840    4640 command_runner.go:130] > NAME=Buildroot
	I0805 16:20:46.814852    4640 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:20:46.814856    4640 command_runner.go:130] > ID=buildroot
	I0805 16:20:46.814869    4640 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:20:46.814873    4640 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:20:46.814969    4640 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:20:46.814985    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:20:46.815099    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:20:46.815290    4640 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:20:46.815297    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:20:46.815526    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:20:46.832473    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:20:46.852626    4640 start.go:296] duration metric: took 87.015317ms for postStartSetup
	I0805 16:20:46.852653    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:46.853264    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:46.853417    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:20:46.853762    4640 start.go:128] duration metric: took 14.156998155s to createHost
	I0805 16:20:46.853776    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.853870    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.853964    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.854078    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.854160    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.854284    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:46.854405    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:46.854413    4640 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 16:20:46.906137    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722900047.071906799
	
	I0805 16:20:46.906149    4640 fix.go:216] guest clock: 1722900047.071906799
	I0805 16:20:46.906154    4640 fix.go:229] Guest: 2024-08-05 16:20:47.071906799 -0700 PDT Remote: 2024-08-05 16:20:46.85377 -0700 PDT m=+14.585721958 (delta=218.136799ms)
	I0805 16:20:46.906178    4640 fix.go:200] guest clock delta is within tolerance: 218.136799ms
	I0805 16:20:46.906182    4640 start.go:83] releasing machines lock for "multinode-985000", held for 14.209573761s
	I0805 16:20:46.906200    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906321    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:46.906429    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906734    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906832    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906917    4640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:20:46.906947    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.906977    4640 ssh_runner.go:195] Run: cat /version.json
	I0805 16:20:46.906987    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.907036    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.907080    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.907105    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.907167    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.907190    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.907251    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.907285    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.907353    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.936969    4640 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0805 16:20:46.937263    4640 ssh_runner.go:195] Run: systemctl --version
	I0805 16:20:46.992747    4640 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:20:46.993626    4640 command_runner.go:130] > systemd 252 (252)
	I0805 16:20:46.993660    4640 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0805 16:20:46.993799    4640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:20:46.998949    4640 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:20:46.998969    4640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:20:46.999002    4640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:20:47.012276    4640 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:20:47.012544    4640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:20:47.012556    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:20:47.012657    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:20:47.027593    4640 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:20:47.027660    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:20:47.035836    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:20:47.044911    4640 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:20:47.044968    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:20:47.053571    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:20:47.061858    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:20:47.070031    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:20:47.078524    4640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:20:47.087870    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:20:47.096303    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:20:47.104482    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:20:47.112756    4640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:20:47.120033    4640 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:20:47.120127    4640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:20:47.128644    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:47.220387    4640 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:20:47.239567    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:20:47.239642    4640 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:20:47.254939    4640 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:20:47.255001    4640 command_runner.go:130] > [Unit]
	I0805 16:20:47.255011    4640 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:20:47.255015    4640 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:20:47.255020    4640 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:20:47.255026    4640 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:20:47.255030    4640 command_runner.go:130] > StartLimitBurst=3
	I0805 16:20:47.255034    4640 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:20:47.255037    4640 command_runner.go:130] > [Service]
	I0805 16:20:47.255041    4640 command_runner.go:130] > Type=notify
	I0805 16:20:47.255055    4640 command_runner.go:130] > Restart=on-failure
	I0805 16:20:47.255063    4640 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:20:47.255073    4640 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:20:47.255080    4640 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:20:47.255088    4640 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:20:47.255094    4640 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:20:47.255099    4640 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:20:47.255112    4640 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:20:47.255120    4640 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:20:47.255128    4640 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:20:47.255134    4640 command_runner.go:130] > ExecStart=
	I0805 16:20:47.255164    4640 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:20:47.255172    4640 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:20:47.255182    4640 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:20:47.255189    4640 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:20:47.255193    4640 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:20:47.255196    4640 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:20:47.255200    4640 command_runner.go:130] > LimitCORE=infinity
	I0805 16:20:47.255205    4640 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:20:47.255209    4640 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:20:47.255212    4640 command_runner.go:130] > TasksMax=infinity
	I0805 16:20:47.255215    4640 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:20:47.255220    4640 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:20:47.255225    4640 command_runner.go:130] > Delegate=yes
	I0805 16:20:47.255230    4640 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:20:47.255233    4640 command_runner.go:130] > KillMode=process
	I0805 16:20:47.255236    4640 command_runner.go:130] > [Install]
	I0805 16:20:47.255259    4640 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:20:47.255324    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:20:47.269909    4640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:20:47.286027    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:20:47.296365    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:20:47.306405    4640 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:20:47.369760    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:20:47.379998    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:20:47.394696    4640 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:20:47.394951    4640 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:20:47.397850    4640 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:20:47.398038    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:20:47.406063    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:20:47.419537    4640 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:20:47.514227    4640 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:20:47.637079    4640 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:20:47.637156    4640 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:20:47.651314    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:47.748259    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:20:50.076345    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.32806615s)
	I0805 16:20:50.076407    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:20:50.086580    4640 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 16:20:50.099944    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:20:50.110410    4640 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:20:50.206329    4640 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:20:50.317239    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:50.417670    4640 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:20:50.431617    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:20:50.443305    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:50.555307    4640 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:20:50.610408    4640 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:20:50.610481    4640 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:20:50.614751    4640 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0805 16:20:50.614762    4640 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0805 16:20:50.614767    4640 command_runner.go:130] > Device: 0,22	Inode: 806         Links: 1
	I0805 16:20:50.614772    4640 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0805 16:20:50.614775    4640 command_runner.go:130] > Access: 2024-08-05 23:20:50.735793184 +0000
	I0805 16:20:50.614784    4640 command_runner.go:130] > Modify: 2024-08-05 23:20:50.735793184 +0000
	I0805 16:20:50.614789    4640 command_runner.go:130] > Change: 2024-08-05 23:20:50.736793062 +0000
	I0805 16:20:50.614792    4640 command_runner.go:130] >  Birth: -
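The "Will wait 60s for socket path" step above polls with stat until cri-dockerd's socket appears. A rough shell equivalent of that wait loop (illustrative only; minikube performs this in Go, not in shell):

	timeout 60 sh -c 'until stat /var/run/cri-dockerd.sock >/dev/null 2>&1; do sleep 1; done'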
	I0805 16:20:50.614829    4640 start.go:563] Will wait 60s for crictl version
	I0805 16:20:50.614890    4640 ssh_runner.go:195] Run: which crictl
	I0805 16:20:50.617807    4640 command_runner.go:130] > /usr/bin/crictl
	I0805 16:20:50.617933    4640 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:20:50.644026    4640 command_runner.go:130] > Version:  0.1.0
	I0805 16:20:50.644070    4640 command_runner.go:130] > RuntimeName:  docker
	I0805 16:20:50.644117    4640 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0805 16:20:50.644195    4640 command_runner.go:130] > RuntimeApiVersion:  v1
	I0805 16:20:50.645396    4640 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0805 16:20:50.645460    4640 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:20:50.661131    4640 command_runner.go:130] > 27.1.1
	I0805 16:20:50.662194    4640 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:20:50.677860    4640 command_runner.go:130] > 27.1.1
	I0805 16:20:50.700872    4640 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 16:20:50.700922    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:50.701316    4640 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0805 16:20:50.706154    4640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:20:50.715610    4640 kubeadm.go:883] updating cluster {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 16:20:50.715677    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:20:50.715736    4640 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:20:50.733572    4640 docker.go:685] Got preloaded images: 
	I0805 16:20:50.733584    4640 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0805 16:20:50.733634    4640 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:20:50.741005    4640 command_runner.go:139] > {"Repositories":{}}
	I0805 16:20:50.741090    4640 ssh_runner.go:195] Run: which lz4
	I0805 16:20:50.744527    4640 command_runner.go:130] > /usr/bin/lz4
	I0805 16:20:50.744558    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0805 16:20:50.744692    4640 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 16:20:50.747718    4640 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 16:20:50.747836    4640 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 16:20:50.747851    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0805 16:20:51.865752    4640 docker.go:649] duration metric: took 1.121114736s to copy over tarball
	I0805 16:20:51.865833    4640 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 16:20:54.241811    4640 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.375959074s)
	I0805 16:20:54.241825    4640 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 16:20:54.267125    4640 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:20:54.275283    4640 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca
39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d2
89d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0805 16:20:54.275373    4640 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
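repositories.json maps each image repository to its tag and digest entries, which is how dockerd recognizes the preloaded layers as already-tagged images after the restart below. To list the registered repositories from a shell (assuming jq is available on the guest, which this log does not show):

	sudo cat /var/lib/docker/image/overlay2/repositories.json | jq -r '.Repositories | keys[]'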
	I0805 16:20:54.288931    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:54.386395    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:20:56.795159    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.408741228s)
	I0805 16:20:56.795248    4640 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:20:56.808093    4640 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0805 16:20:56.808107    4640 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0805 16:20:56.808111    4640 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0805 16:20:56.808116    4640 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0805 16:20:56.808120    4640 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0805 16:20:56.808123    4640 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0805 16:20:56.808128    4640 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0805 16:20:56.808135    4640 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:20:56.809018    4640 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 16:20:56.809035    4640 cache_images.go:84] Images are preloaded, skipping loading
	I0805 16:20:56.809048    4640 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0805 16:20:56.809127    4640 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-985000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
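Note the empty ExecStart= line before the real one: for non-oneshot services, a systemd drop-in must first clear ExecStart before redefining it, or systemd rejects the unit for having two ExecStart entries. The 10-kubeadm.conf drop-in scp'd a few lines below presumably carries exactly this [Service] section; after writing it, the merged unit can be inspected with:

	sudo systemctl daemon-reload
	systemctl cat kubelet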
	I0805 16:20:56.809195    4640 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 16:20:56.847007    4640 command_runner.go:130] > cgroupfs
	I0805 16:20:56.847610    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:20:56.847620    4640 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 16:20:56.847630    4640 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 16:20:56.847650    4640 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-985000 NodeName:multinode-985000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 16:20:56.847744    4640 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-985000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
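The rendered kubeadm config above is staged as kubeadm.yaml.new a few lines below and swapped into place before init runs. It could also be sanity-checked without touching node state; for example (hypothetical invocation, not taken from this log):

	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run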
	I0805 16:20:56.847807    4640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 16:20:56.855919    4640 command_runner.go:130] > kubeadm
	I0805 16:20:56.855931    4640 command_runner.go:130] > kubectl
	I0805 16:20:56.855934    4640 command_runner.go:130] > kubelet
	I0805 16:20:56.855959    4640 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:20:56.856010    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 16:20:56.863284    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0805 16:20:56.876753    4640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:20:56.890292    4640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0805 16:20:56.904628    4640 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0805 16:20:56.907711    4640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:20:56.917108    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:57.013172    4640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:20:57.028650    4640 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000 for IP: 192.169.0.13
	I0805 16:20:57.028663    4640 certs.go:194] generating shared ca certs ...
	I0805 16:20:57.028674    4640 certs.go:226] acquiring lock for ca certs: {Name:mkb83e058d89c7d4e66f4136f377a3c305b13735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.028863    4640 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key
	I0805 16:20:57.028935    4640 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key
	I0805 16:20:57.028946    4640 certs.go:256] generating profile certs ...
	I0805 16:20:57.028995    4640 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key
	I0805 16:20:57.029007    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt with IP's: []
	I0805 16:20:57.088127    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt ...
	I0805 16:20:57.088142    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt: {Name:mkb7087fa165ae496621b10df42dfd2f8603360a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.088531    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key ...
	I0805 16:20:57.088540    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key: {Name:mk37e627de9c39a2300d317d721ebf92a202a17e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.088775    4640 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec
	I0805 16:20:57.088790    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.13]
	I0805 16:20:57.189318    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec ...
	I0805 16:20:57.189336    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec: {Name:mkb4501af4f6db766eb719de2f42fc564a23d2d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.189653    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec ...
	I0805 16:20:57.189669    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec: {Name:mke641ddecfc5629bb592a5b6321d446ed3b31bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.189903    4640 certs.go:381] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt
	I0805 16:20:57.190140    4640 certs.go:385] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key
	I0805 16:20:57.190318    4640 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key
	I0805 16:20:57.190336    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt with IP's: []
	I0805 16:20:57.386717    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt ...
	I0805 16:20:57.386733    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt: {Name:mk486344c8c5b8383e5349f68a995b553e8d31c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.387043    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key ...
	I0805 16:20:57.387052    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key: {Name:mk2b24e1a5e962e12395adf21e4f6ad64901ee0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.387278    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 16:20:57.387306    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 16:20:57.387325    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 16:20:57.387349    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 16:20:57.387368    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 16:20:57.387391    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 16:20:57.387411    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 16:20:57.387432    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 16:20:57.387531    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem (1338 bytes)
	W0805 16:20:57.387583    4640 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0805 16:20:57.387591    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:20:57.387621    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem (1082 bytes)
	I0805 16:20:57.387656    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:20:57.387684    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem (1675 bytes)
	I0805 16:20:57.387747    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:20:57.387781    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem -> /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.387803    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.387822    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.388188    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:20:57.408800    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 16:20:57.429927    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:20:57.449924    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:20:57.470736    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 16:20:57.490564    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 16:20:57.511342    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:20:57.531190    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 16:20:57.551984    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0805 16:20:57.571601    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0805 16:20:57.592369    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:20:57.611866    4640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 16:20:57.626527    4640 ssh_runner.go:195] Run: openssl version
	I0805 16:20:57.630504    4640 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0805 16:20:57.630711    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0805 16:20:57.638913    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642115    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642280    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642315    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.646345    4640 command_runner.go:130] > 51391683
	I0805 16:20:57.646544    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0805 16:20:57.654953    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0805 16:20:57.663842    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667242    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667258    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667300    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.671438    4640 command_runner.go:130] > 3ec20f2e
	I0805 16:20:57.671648    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 16:20:57.679692    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:20:57.688061    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691411    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691493    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691531    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.695572    4640 command_runner.go:130] > b5213941
	I0805 16:20:57.695754    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
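The hash-and-symlink sequence above implements OpenSSL's hashed certificate directory: openssl x509 -hash prints the subject-name hash (51391683, 3ec20f2e, b5213941 here), and OpenSSL resolves a CA by looking up <hash>.0 under /etc/ssl/certs. The generic pattern, equivalent to what the log shows for minikubeCA.pem:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"

(OpenSSL's bundled c_rehash script performs the same operation for an entire directory.)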
	I0805 16:20:57.704703    4640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:20:57.707752    4640 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 16:20:57.707872    4640 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 16:20:57.707921    4640 kubeadm.go:392] StartCluster: {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:20:57.708054    4640 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:20:57.720408    4640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 16:20:57.731114    4640 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0805 16:20:57.731128    4640 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0805 16:20:57.731133    4640 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0805 16:20:57.731194    4640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 16:20:57.739645    4640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 16:20:57.751095    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0805 16:20:57.751108    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0805 16:20:57.751113    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0805 16:20:57.751120    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:20:57.751266    4640 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:20:57.751273    4640 kubeadm.go:157] found existing configuration files:
	
	I0805 16:20:57.751324    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 16:20:57.759086    4640 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:20:57.759185    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:20:57.759233    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 16:20:57.769060    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 16:20:57.778103    4640 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:20:57.778143    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:20:57.778190    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 16:20:57.786612    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 16:20:57.794733    4640 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:20:57.794754    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:20:57.794796    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 16:20:57.802671    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 16:20:57.810242    4640 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:20:57.810264    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:20:57.810299    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 16:20:57.818339    4640 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 16:20:57.890449    4640 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 16:20:57.890461    4640 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0805 16:20:57.890501    4640 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 16:20:57.890507    4640 command_runner.go:130] > [preflight] Running pre-flight checks
	I0805 16:20:57.984851    4640 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 16:20:57.984855    4640 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 16:20:57.984956    4640 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 16:20:57.984962    4640 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 16:20:57.985041    4640 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 16:20:57.985038    4640 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 16:20:58.152965    4640 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:20:58.152995    4640 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:20:58.175785    4640 out.go:204]   - Generating certificates and keys ...
	I0805 16:20:58.175840    4640 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0805 16:20:58.175851    4640 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 16:20:58.175914    4640 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0805 16:20:58.175920    4640 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 16:20:58.229002    4640 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 16:20:58.229016    4640 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 16:20:58.322701    4640 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 16:20:58.322717    4640 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0805 16:20:58.394063    4640 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 16:20:58.394077    4640 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0805 16:20:58.601975    4640 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 16:20:58.601995    4640 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0805 16:20:58.821056    4640 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 16:20:58.821065    4640 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0805 16:20:58.821204    4640 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:58.821214    4640 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.150811    4640 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 16:20:59.150817    4640 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0805 16:20:59.151036    4640 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.151046    4640 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.206073    4640 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 16:20:59.206088    4640 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 16:20:59.294956    4640 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 16:20:59.294966    4640 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 16:20:59.348591    4640 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 16:20:59.348602    4640 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0805 16:20:59.348788    4640 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:20:59.348797    4640 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:20:59.511379    4640 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:20:59.511395    4640 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:20:59.789652    4640 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 16:20:59.789666    4640 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 16:20:59.965508    4640 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:20:59.965517    4640 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:21:00.208268    4640 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:21:00.208284    4640 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:21:00.402575    4640 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:21:00.402582    4640 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:21:00.409122    4640 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 16:21:00.409137    4640 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 16:21:00.410639    4640 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:21:00.410652    4640 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:21:00.430944    4640 out.go:204]   - Booting up control plane ...
	I0805 16:21:00.431017    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:21:00.431032    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:21:00.431106    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:21:00.431106    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:21:00.431174    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:21:00.431182    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:21:00.431274    4640 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:21:00.431286    4640 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:21:00.431361    4640 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:21:00.431369    4640 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:21:00.431399    4640 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 16:21:00.431405    4640 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0805 16:21:00.540991    4640 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 16:21:00.541004    4640 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 16:21:00.541076    4640 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 16:21:00.541081    4640 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 16:21:01.042556    4640 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.719164ms
	I0805 16:21:01.042573    4640 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.719164ms
	I0805 16:21:01.042632    4640 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 16:21:01.042639    4640 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 16:21:05.541995    4640 kubeadm.go:310] [api-check] The API server is healthy after 4.502407968s
	I0805 16:21:05.542014    4640 command_runner.go:130] > [api-check] The API server is healthy after 4.502407968s
	I0805 16:21:05.551474    4640 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 16:21:05.551486    4640 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 16:21:05.558278    4640 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 16:21:05.558284    4640 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 16:21:05.572116    4640 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 16:21:05.572130    4640 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0805 16:21:05.572281    4640 kubeadm.go:310] [mark-control-plane] Marking the node multinode-985000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 16:21:05.572292    4640 command_runner.go:130] > [mark-control-plane] Marking the node multinode-985000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 16:21:05.579214    4640 kubeadm.go:310] [bootstrap-token] Using token: 0mwls8.ribzsy6ooov2flu0
	I0805 16:21:05.579225    4640 command_runner.go:130] > [bootstrap-token] Using token: 0mwls8.ribzsy6ooov2flu0
	I0805 16:21:05.613851    4640 out.go:204]   - Configuring RBAC rules ...
	I0805 16:21:05.613974    4640 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 16:21:05.613988    4640 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 16:21:05.655317    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 16:21:05.655329    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 16:21:05.659733    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 16:21:05.659737    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 16:21:05.661608    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 16:21:05.661619    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 16:21:05.663605    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 16:21:05.663612    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 16:21:05.665771    4640 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 16:21:05.665778    4640 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 16:21:05.947572    4640 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 16:21:05.947585    4640 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 16:21:06.357765    4640 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 16:21:06.357776    4640 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0805 16:21:06.946930    4640 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 16:21:06.946942    4640 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0805 16:21:06.947937    4640 kubeadm.go:310] 
	I0805 16:21:06.947989    4640 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 16:21:06.947996    4640 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0805 16:21:06.948000    4640 kubeadm.go:310] 
	I0805 16:21:06.948071    4640 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 16:21:06.948080    4640 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0805 16:21:06.948088    4640 kubeadm.go:310] 
	I0805 16:21:06.948121    4640 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 16:21:06.948125    4640 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0805 16:21:06.948179    4640 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 16:21:06.948187    4640 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 16:21:06.948229    4640 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 16:21:06.948234    4640 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 16:21:06.948237    4640 kubeadm.go:310] 
	I0805 16:21:06.948284    4640 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 16:21:06.948302    4640 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0805 16:21:06.948309    4640 kubeadm.go:310] 
	I0805 16:21:06.948354    4640 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 16:21:06.948367    4640 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 16:21:06.948375    4640 kubeadm.go:310] 
	I0805 16:21:06.948414    4640 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 16:21:06.948418    4640 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0805 16:21:06.948479    4640 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 16:21:06.948488    4640 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 16:21:06.948558    4640 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 16:21:06.948564    4640 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 16:21:06.948570    4640 kubeadm.go:310] 
	I0805 16:21:06.948633    4640 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 16:21:06.948638    4640 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0805 16:21:06.948701    4640 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 16:21:06.948708    4640 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0805 16:21:06.948715    4640 kubeadm.go:310] 
	I0805 16:21:06.948788    4640 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.948795    4640 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.948879    4640 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf \
	I0805 16:21:06.948886    4640 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf \
	I0805 16:21:06.948905    4640 kubeadm.go:310] 	--control-plane 
	I0805 16:21:06.948911    4640 command_runner.go:130] > 	--control-plane 
	I0805 16:21:06.948916    4640 kubeadm.go:310] 
	I0805 16:21:06.948980    4640 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 16:21:06.948984    4640 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0805 16:21:06.948987    4640 kubeadm.go:310] 
	I0805 16:21:06.949052    4640 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.949057    4640 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.949136    4640 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf 
	I0805 16:21:06.949141    4640 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf 
	I0805 16:21:06.949613    4640 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 16:21:06.949621    4640 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 16:21:06.949644    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:21:06.949649    4640 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 16:21:06.972147    4640 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0805 16:21:07.030449    4640 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0805 16:21:07.036220    4640 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0805 16:21:07.036233    4640 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0805 16:21:07.036239    4640 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0805 16:21:07.036249    4640 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 16:21:07.036254    4640 command_runner.go:130] > Access: 2024-08-05 23:20:43.694299549 +0000
	I0805 16:21:07.036259    4640 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0805 16:21:07.036264    4640 command_runner.go:130] > Change: 2024-08-05 23:20:41.058596444 +0000
	I0805 16:21:07.036266    4640 command_runner.go:130] >  Birth: -
	I0805 16:21:07.036368    4640 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0805 16:21:07.036375    4640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0805 16:21:07.050414    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0805 16:21:07.243070    4640 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0805 16:21:07.246445    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0805 16:21:07.250670    4640 command_runner.go:130] > serviceaccount/kindnet created
	I0805 16:21:07.255971    4640 command_runner.go:130] > daemonset.apps/kindnet created
	I0805 16:21:07.257424    4640 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 16:21:07.257500    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-985000 minikube.k8s.io/updated_at=2024_08_05T16_21_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=multinode-985000 minikube.k8s.io/primary=true
	I0805 16:21:07.257502    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.266956    4640 command_runner.go:130] > -16
	I0805 16:21:07.267023    4640 ops.go:34] apiserver oom_adj: -16
	I0805 16:21:07.390396    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0805 16:21:07.392070    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.400579    4640 command_runner.go:130] > node/multinode-985000 labeled
	I0805 16:21:07.456213    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:07.893323    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.956622    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:08.392391    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:08.450793    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:08.892411    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:08.950456    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:09.393238    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:09.450291    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:09.892156    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:09.951159    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:10.393019    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:10.451734    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:10.893100    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:10.954360    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:11.393009    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:11.452879    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:11.894187    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:11.953480    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:12.392194    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:12.452444    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:12.894265    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:12.955367    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:13.392882    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:13.455680    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:13.892568    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:13.950195    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:14.393254    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:14.452940    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:14.892187    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:14.948447    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:15.392762    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:15.451815    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:15.892531    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:15.952781    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:16.393008    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:16.454659    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:16.892423    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:16.957989    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:17.392489    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:17.452653    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:17.892453    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:17.953809    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:18.392692    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:18.450726    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:18.893940    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:18.957266    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:19.393402    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:19.452345    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:19.892761    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:19.952524    4640 command_runner.go:130] > NAME      SECRETS   AGE
	I0805 16:21:19.952537    4640 command_runner.go:130] > default   0         1s
	I0805 16:21:19.952551    4640 kubeadm.go:1113] duration metric: took 12.695106906s to wait for elevateKubeSystemPrivileges
	I0805 16:21:19.952568    4640 kubeadm.go:394] duration metric: took 22.244643678s to StartCluster
	I0805 16:21:19.952584    4640 settings.go:142] acquiring lock: {Name:mk564a817a54ecf2aef16a4d2309e85208c0231f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:21:19.952678    4640 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:19.953130    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/kubeconfig: {Name:mk2a0d8b4d330b3c26432fc65d015ddf98a9cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:21:19.953387    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 16:21:19.953391    4640 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:21:19.953437    4640 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 16:21:19.953474    4640 addons.go:69] Setting storage-provisioner=true in profile "multinode-985000"
	I0805 16:21:19.953501    4640 addons.go:234] Setting addon storage-provisioner=true in "multinode-985000"
	I0805 16:21:19.953507    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:19.953501    4640 addons.go:69] Setting default-storageclass=true in profile "multinode-985000"
	I0805 16:21:19.953520    4640 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:21:19.953542    4640 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-985000"
	I0805 16:21:19.953772    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.953787    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.953870    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.953897    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.962985    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52500
	I0805 16:21:19.963341    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52502
	I0805 16:21:19.963365    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.963645    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.963722    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.963735    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.963997    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.964004    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.964027    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.964249    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.964372    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.964430    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.964458    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.964465    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.964535    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.966651    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:19.966874    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:19.967275    4640 cert_rotation.go:137] Starting client certificate rotation controller
	I0805 16:21:19.967411    4640 addons.go:234] Setting addon default-storageclass=true in "multinode-985000"
	I0805 16:21:19.967434    4640 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:21:19.967665    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.967688    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.973226    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52504
	I0805 16:21:19.973568    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.973922    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.973942    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.974163    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.974282    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.974363    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.974444    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.975405    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:21:19.975491    4640 out.go:177] * Verifying Kubernetes components...
	I0805 16:21:19.976182    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52506
	I0805 16:21:19.976461    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.976795    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.976812    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.976999    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.977392    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.977409    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.986027    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52508
	I0805 16:21:19.986361    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.986712    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.986741    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.986959    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.987071    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.987149    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.987227    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.988179    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:21:19.988299    4640 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 16:21:19.988307    4640 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 16:21:19.988315    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:21:19.988395    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:21:19.988484    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:21:19.988568    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:21:19.988639    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:21:20.032241    4640 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:21:20.032361    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:20.069496    4640 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:21:20.069510    4640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 16:21:20.069530    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:21:20.069717    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:21:20.069824    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:21:20.069935    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:21:20.070041    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:21:20.084762    4640 command_runner.go:130] > apiVersion: v1
	I0805 16:21:20.084775    4640 command_runner.go:130] > data:
	I0805 16:21:20.084779    4640 command_runner.go:130] >   Corefile: |
	I0805 16:21:20.084782    4640 command_runner.go:130] >     .:53 {
	I0805 16:21:20.084785    4640 command_runner.go:130] >         errors
	I0805 16:21:20.084790    4640 command_runner.go:130] >         health {
	I0805 16:21:20.084794    4640 command_runner.go:130] >            lameduck 5s
	I0805 16:21:20.084796    4640 command_runner.go:130] >         }
	I0805 16:21:20.084812    4640 command_runner.go:130] >         ready
	I0805 16:21:20.084822    4640 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0805 16:21:20.084829    4640 command_runner.go:130] >            pods insecure
	I0805 16:21:20.084833    4640 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0805 16:21:20.084841    4640 command_runner.go:130] >            ttl 30
	I0805 16:21:20.084853    4640 command_runner.go:130] >         }
	I0805 16:21:20.084863    4640 command_runner.go:130] >         prometheus :9153
	I0805 16:21:20.084868    4640 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0805 16:21:20.084880    4640 command_runner.go:130] >            max_concurrent 1000
	I0805 16:21:20.084884    4640 command_runner.go:130] >         }
	I0805 16:21:20.084887    4640 command_runner.go:130] >         cache 30
	I0805 16:21:20.084898    4640 command_runner.go:130] >         loop
	I0805 16:21:20.084902    4640 command_runner.go:130] >         reload
	I0805 16:21:20.084905    4640 command_runner.go:130] >         loadbalance
	I0805 16:21:20.084908    4640 command_runner.go:130] >     }
	I0805 16:21:20.084911    4640 command_runner.go:130] > kind: ConfigMap
	I0805 16:21:20.084914    4640 command_runner.go:130] > metadata:
	I0805 16:21:20.084921    4640 command_runner.go:130] >   creationTimestamp: "2024-08-05T23:21:06Z"
	I0805 16:21:20.084926    4640 command_runner.go:130] >   name: coredns
	I0805 16:21:20.084929    4640 command_runner.go:130] >   namespace: kube-system
	I0805 16:21:20.084933    4640 command_runner.go:130] >   resourceVersion: "266"
	I0805 16:21:20.084937    4640 command_runner.go:130] >   uid: 5057af03-8824-4e67-a4b6-ef90c1ded7ce
	I0805 16:21:20.085056    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0805 16:21:20.184335    4640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:21:20.203408    4640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 16:21:20.278639    4640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:21:20.507141    4640 command_runner.go:130] > configmap/coredns replaced
	I0805 16:21:20.511660    4640 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0805 16:21:20.511929    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:20.511932    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:20.512124    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:20.512125    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:20.512341    4640 node_ready.go:35] waiting up to 6m0s for node "multinode-985000" to be "Ready" ...
	I0805 16:21:20.512409    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:20.512416    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.512423    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.512424    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:20.512428    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.512430    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.512438    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.512446    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.520076    4640 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0805 16:21:20.520087    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.520092    4640 round_trippers.go:580]     Audit-Id: 304f14c4-a466-4fb6-b401-b28f4df4dfa1
	I0805 16:21:20.520095    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.520103    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.520107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.520111    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.520113    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:20.520117    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.521443    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.521456    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.521464    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.521474    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"381","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.521479    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.521487    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.521502    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.521511    4640 round_trippers.go:580]     Audit-Id: bcd9e393-6b08-4ffb-a73b-6e7c430f0212
	I0805 16:21:20.521518    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.521831    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:20.521865    4640 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"381","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.521904    4640 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:20.521914    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.521921    4640 round_trippers.go:473]     Content-Type: application/json
	I0805 16:21:20.521930    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.521935    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.530726    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.530739    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.530744    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.530748    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:20.530751    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.530754    4640 round_trippers.go:580]     Audit-Id: ba15a3b2-b69b-473e-a331-81e01385ad47
	I0805 16:21:20.530756    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.530758    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.530761    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.530773    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"383","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.588534    4640 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0805 16:21:20.588563    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.588570    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.588737    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.588752    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.588765    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.588764    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.588772    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.588919    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.588920    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.588931    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.589012    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I0805 16:21:20.589020    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.589028    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.589034    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.597496    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.597508    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.597513    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.597518    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.597521    4640 round_trippers.go:580]     Content-Length: 1273
	I0805 16:21:20.597523    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.597525    4640 round_trippers.go:580]     Audit-Id: d7394cfc-1eb3-4623-8a7f-a5088a0398c8
	I0805 16:21:20.597527    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.597530    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.597844    4640 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"391"},"items":[{"metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0805 16:21:20.598117    4640 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0805 16:21:20.598145    4640 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0805 16:21:20.598150    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.598157    4640 round_trippers.go:473]     Content-Type: application/json
	I0805 16:21:20.598166    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.598171    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.619819    4640 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0805 16:21:20.619836    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.619842    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.619846    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.619849    4640 round_trippers.go:580]     Content-Length: 1220
	I0805 16:21:20.619852    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.619855    4640 round_trippers.go:580]     Audit-Id: 299d4cc8-0cb5-4dd5-80b3-5d54592ecd90
	I0805 16:21:20.619859    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.619861    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.619898    4640 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0805 16:21:20.619983    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.619992    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.620141    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.620153    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.620166    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.750372    4640 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0805 16:21:20.753871    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0805 16:21:20.759257    4640 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0805 16:21:20.767575    4640 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0805 16:21:20.774745    4640 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0805 16:21:20.786454    4640 command_runner.go:130] > pod/storage-provisioner created
	I0805 16:21:20.787838    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.787851    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.788087    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.788087    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.788098    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.788109    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.788117    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.788261    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.788280    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.788280    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.811467    4640 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0805 16:21:20.871433    4640 addons.go:510] duration metric: took 917.995637ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0805 16:21:21.014507    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:21.014532    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.014545    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.014553    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.014605    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:21.014619    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.014631    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.014638    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.017465    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.017464    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.017480    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.017492    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.017492    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.017496    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:21.017502    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.017504    4640 round_trippers.go:580]     Audit-Id: fb264fed-80ee-469b-a34e-7b1e8460f94b
	I0805 16:21:21.017506    4640 round_trippers.go:580]     Audit-Id: c9362211-8dfc-4385-87db-76c6486df53e
	I0805 16:21:21.017512    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.017513    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.017518    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.017519    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.017522    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.017524    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.017529    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.017545    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.017616    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"395","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:21.017684    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:21.017735    4640 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-985000" context rescaled to 1 replicas
	I0805 16:21:21.514170    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:21.514200    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.514219    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.514226    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.516804    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.516819    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.516826    4640 round_trippers.go:580]     Audit-Id: 9396255c-231d-48cb-a53f-22663307b969
	I0805 16:21:21.516830    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.516834    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.516839    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.516849    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.516854    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.516951    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.013275    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:22.013299    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:22.013311    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:22.013319    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:22.016138    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:22.016155    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:22.016163    4640 round_trippers.go:580]     Audit-Id: cc869aef-9ab4-4a7f-8835-cce2afa76dd9
	I0805 16:21:22.016168    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:22.016175    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:22.016182    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:22.016187    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:22.016193    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:22 GMT
	I0805 16:21:22.016497    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.512546    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:22.512561    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:22.512567    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:22.512572    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:22.515381    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:22.515393    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:22.515401    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:22.515407    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:22.515412    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:22.515416    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:22 GMT
	I0805 16:21:22.515420    4640 round_trippers.go:580]     Audit-Id: e7d470a0-7df5-4d85-9bb5-cbf15cfa989f
	I0805 16:21:22.515423    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:22.515634    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.515838    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:23.012594    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:23.012606    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:23.012612    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:23.012616    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:23.014085    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:23.014095    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:23.014101    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:23.014104    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:23.014107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:23.014109    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:23.014113    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:23 GMT
	I0805 16:21:23.014116    4640 round_trippers.go:580]     Audit-Id: e12d5034-3bd9-498b-844e-12133805ded9
	I0805 16:21:23.014306    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:23.513150    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:23.513163    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:23.513168    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:23.513172    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:23.514595    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:23.514604    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:23.514610    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:23.514614    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:23.514617    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:23.514619    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:23.514622    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:23 GMT
	I0805 16:21:23.514635    4640 round_trippers.go:580]     Audit-Id: 2bc52e3b-1575-453f-87fa-51f4301a9426
	I0805 16:21:23.514871    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:24.012814    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:24.012826    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:24.012832    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:24.012835    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:24.014366    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:24.014379    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:24.014384    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:24.014388    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:24.014406    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:24.014411    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:24.014414    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:24 GMT
	I0805 16:21:24.014417    4640 round_trippers.go:580]     Audit-Id: f14d8611-e5e1-45fe-92f3-95559148c71b
	I0805 16:21:24.014572    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:24.513607    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:24.513620    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:24.513626    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:24.513629    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:24.515210    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:24.515220    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:24.515242    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:24.515253    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:24.515260    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:24.515264    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:24.515268    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:24 GMT
	I0805 16:21:24.515271    4640 round_trippers.go:580]     Audit-Id: 0a897d84-d437-4212-b36d-e414fedf55d4
	I0805 16:21:24.515427    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:25.013253    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:25.013272    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:25.013283    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:25.013321    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:25.015275    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:25.015308    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:25.015317    4640 round_trippers.go:580]     Audit-Id: ced7b45c-a072-4322-89ab-d0cc21ddfb1d
	I0805 16:21:25.015322    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:25.015325    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:25.015328    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:25.015332    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:25.015336    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:25 GMT
	I0805 16:21:25.015627    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:25.015849    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:25.512881    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:25.512902    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:25.512914    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:25.512920    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:25.515502    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:25.515517    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:25.515524    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:25.515529    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:25.515534    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:25.515538    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:25.515542    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:25 GMT
	I0805 16:21:25.515545    4640 round_trippers.go:580]     Audit-Id: dd6b59c1-dde3-4d67-b446-8823ad717d4f
	I0805 16:21:25.515665    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:26.013787    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:26.013811    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:26.013824    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:26.013830    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:26.016420    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:26.016440    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:26.016463    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:26 GMT
	I0805 16:21:26.016470    4640 round_trippers.go:580]     Audit-Id: 19939705-2879-44e6-830c-0c86394087ed
	I0805 16:21:26.016473    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:26.016485    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:26.016490    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:26.016494    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:26.016965    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:26.512523    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:26.512536    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:26.512541    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:26.512544    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:26.514158    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:26.514167    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:26.514172    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:26.514176    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:26.514179    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:26.514182    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:26 GMT
	I0805 16:21:26.514184    4640 round_trippers.go:580]     Audit-Id: f2346665-2701-41e1-94b0-41a70aa2f170
	I0805 16:21:26.514187    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:26.514489    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:27.013107    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:27.013136    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:27.013148    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:27.013155    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:27.015615    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:27.015632    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:27.015639    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:27 GMT
	I0805 16:21:27.015655    4640 round_trippers.go:580]     Audit-Id: 6abee22d-c1db-48e9-99db-e07791ed571f
	I0805 16:21:27.015661    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:27.015664    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:27.015667    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:27.015672    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:27.015747    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:27.015996    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:27.513549    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:27.513570    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:27.513582    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:27.513589    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:27.516173    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:27.516189    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:27.516197    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:27.516200    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:27.516204    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:27.516209    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:27 GMT
	I0805 16:21:27.516212    4640 round_trippers.go:580]     Audit-Id: a227585b-ae23-4bd1-b1dc-643eadd970cc
	I0805 16:21:27.516215    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:27.516416    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:28.014104    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:28.014132    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:28.014143    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:28.014159    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:28.016690    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:28.016705    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:28.016713    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:28.016717    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:28.016721    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:28.016725    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:28.016728    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:28 GMT
	I0805 16:21:28.016731    4640 round_trippers.go:580]     Audit-Id: 0d14831c-cc1f-41a9-a252-85e191b9594d
	I0805 16:21:28.016834    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:28.512703    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:28.512726    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:28.512739    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:28.512747    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:28.515176    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:28.515190    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:28.515197    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:28 GMT
	I0805 16:21:28.515201    4640 round_trippers.go:580]     Audit-Id: 6af459f8-bb08-43bf-ac7f-51ccacd5d664
	I0805 16:21:28.515206    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:28.515211    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:28.515215    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:28.515219    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:28.515378    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.013324    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:29.013354    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:29.013360    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:29.013364    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:29.014793    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:29.014804    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:29.014809    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:29.014813    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:29 GMT
	I0805 16:21:29.014817    4640 round_trippers.go:580]     Audit-Id: 2e50ff34-0c55-4136-b537-eee73f73706d
	I0805 16:21:29.014819    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:29.014822    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:29.014826    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:29.015098    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.513802    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:29.513832    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:29.513844    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:29.513852    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:29.516479    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:29.516496    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:29.516504    4640 round_trippers.go:580]     Audit-Id: bcbc3920-26b4-45f4-b91a-ce0e3dc11770
	I0805 16:21:29.516529    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:29.516538    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:29.516544    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:29.516549    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:29.516554    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:29 GMT
	I0805 16:21:29.516682    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.516938    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:30.013325    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:30.013349    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:30.013436    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:30.013448    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:30.016209    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:30.016222    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:30.016228    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:30.016233    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:30 GMT
	I0805 16:21:30.016238    4640 round_trippers.go:580]     Audit-Id: fb0bd3e0-89c3-4c77-a27d-be315cab22b7
	I0805 16:21:30.016242    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:30.016277    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:30.016283    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:30.016477    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:30.514344    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:30.514386    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:30.514482    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:30.514494    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:30.518828    4640 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 16:21:30.518860    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:30.518870    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:30.518876    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:30.518882    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:30 GMT
	I0805 16:21:30.518888    4640 round_trippers.go:580]     Audit-Id: c1b08932-ee78-4dcb-a190-3a8b24421284
	I0805 16:21:30.518894    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:30.518899    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:30.519002    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:31.012673    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:31.012701    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:31.012712    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:31.012718    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:31.015543    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:31.015560    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:31.015568    4640 round_trippers.go:580]     Audit-Id: b6586a64-ec07-44ee-8a00-1f3b8a00e0bd
	I0805 16:21:31.015572    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:31.015576    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:31.015580    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:31.015583    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:31.015589    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:31 GMT
	I0805 16:21:31.015682    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:31.512531    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:31.512543    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:31.512550    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:31.512554    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:31.514066    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:31.514076    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:31.514081    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:31 GMT
	I0805 16:21:31.514085    4640 round_trippers.go:580]     Audit-Id: 7d410de7-b0d5-4d4e-8455-d31b0df7d302
	I0805 16:21:31.514089    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:31.514093    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:31.514096    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:31.514107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:31.514758    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:32.014110    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:32.014136    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:32.014147    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:32.014157    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:32.016553    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:32.016570    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:32.016580    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:32.016586    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:32.016592    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:32.016598    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:32.016602    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:32 GMT
	I0805 16:21:32.016605    4640 round_trippers.go:580]     Audit-Id: 67fdb64b-273a-46c2-aac5-c3b115422aa4
	I0805 16:21:32.016861    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:32.017132    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:32.513171    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:32.513188    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:32.513195    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:32.513198    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:32.514908    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:32.514920    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:32.514925    4640 round_trippers.go:580]     Audit-Id: 0f5a2e98-6be6-4963-8897-91c70642048c
	I0805 16:21:32.514928    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:32.514931    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:32.514933    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:32.514936    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:32.514939    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:32 GMT
	I0805 16:21:32.515082    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:33.013769    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:33.013803    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:33.013814    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:33.013822    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:33.016491    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:33.016509    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:33.016519    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:33 GMT
	I0805 16:21:33.016526    4640 round_trippers.go:580]     Audit-Id: 96b5f269-7be9-42a9-9687-cba57d05f76e
	I0805 16:21:33.016532    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:33.016538    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:33.016543    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:33.016548    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:33.016715    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:33.512751    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:33.512772    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:33.512783    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:33.512789    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:33.515431    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:33.515480    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:33.515498    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:33.515506    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:33.515510    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:33 GMT
	I0805 16:21:33.515513    4640 round_trippers.go:580]     Audit-Id: 6cd252a3-d07d-441e-bcf4-bc3bd00c2488
	I0805 16:21:33.515517    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:33.515520    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:33.515747    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.013003    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:34.013032    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:34.013043    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:34.013052    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:34.015447    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:34.015465    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:34.015472    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:34.015476    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:34.015479    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:34.015484    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:34.015487    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:34 GMT
	I0805 16:21:34.015492    4640 round_trippers.go:580]     Audit-Id: efcfb0d1-8345-4db5-bce9-e31085842da3
	I0805 16:21:34.015599    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.513298    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:34.513317    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:34.513376    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:34.513383    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:34.515051    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:34.515065    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:34.515072    4640 round_trippers.go:580]     Audit-Id: 2a42cb6a-0051-47bd-85f4-9f8ca80afa70
	I0805 16:21:34.515078    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:34.515081    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:34.515087    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:34.515099    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:34.515103    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:34 GMT
	I0805 16:21:34.515359    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.515540    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:35.013932    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:35.013957    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:35.013968    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:35.013976    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:35.016505    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:35.016524    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:35.016530    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:35.016537    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:35.016541    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:35.016544    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:35.016555    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:35 GMT
	I0805 16:21:35.016559    4640 round_trippers.go:580]     Audit-Id: 09fa0e04-c026-439e-9cd7-392fd82b16fe
	I0805 16:21:35.016913    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:35.513491    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:35.513514    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:35.513526    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:35.513532    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:35.515995    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:35.516012    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:35.516020    4640 round_trippers.go:580]     Audit-Id: a2b05a8a-9a91-4d20-93d0-b8701ac59b95
	I0805 16:21:35.516024    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:35.516036    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:35.516041    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:35.516055    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:35.516060    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:35 GMT
	I0805 16:21:35.516151    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:36.013521    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.013549    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.013561    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.013566    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.016095    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:36.016112    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.016119    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.016131    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.016136    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.016140    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.016144    4640 round_trippers.go:580]     Audit-Id: 77e04f39-a037-4ea2-9716-ad04139089d1
	I0805 16:21:36.016147    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.016230    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:36.016465    4640 node_ready.go:49] node "multinode-985000" has status "Ready":"True"
	I0805 16:21:36.016481    4640 node_ready.go:38] duration metric: took 15.504115701s for node "multinode-985000" to be "Ready" ...
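
The fifteen-second wait above is the standard node-readiness loop: minikube re-GETs /api/v1/nodes/multinode-985000 roughly every 500ms until the node's Ready condition flips from "False" to "True". A minimal client-go sketch of that loop follows; `cs` is an assumed pre-built clientset, and the 500ms interval and 6m timeout are read off the log, not taken from minikube's source.

    package main

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady re-fetches the Node until its Ready condition is
    // True, mirroring the ~500ms GET loop in the log above.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient API errors as "not ready yet"
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
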
	I0805 16:21:36.016489    4640 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:21:36.016543    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:36.016551    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.016559    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.016563    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.019046    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:36.019057    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.019065    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.019069    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.019078    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.019081    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.019084    4640 round_trippers.go:580]     Audit-Id: 96048303-6e62-4ba8-a291-bc1ad976756e
	I0805 16:21:36.019091    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.019721    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
	I0805 16:21:36.021921    4640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:36.021960    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:36.021964    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.021970    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.021974    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.023179    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.023187    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.023192    4640 round_trippers.go:580]     Audit-Id: ba42f387-f106-4773-86de-3a22085fd86a
	I0805 16:21:36.023195    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.023198    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.023200    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.023204    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.023208    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.023410    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:36.023652    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.023659    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.023665    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.023671    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.024732    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.024744    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.024752    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.024758    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.024765    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.024768    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.024771    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.024775    4640 round_trippers.go:580]     Audit-Id: 2008721c-b230-4e73-b037-d3a843d7c7c8
	I0805 16:21:36.024909    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:36.523495    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:36.523508    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.523514    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.523519    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.525003    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.525014    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.525020    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.525042    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.525049    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.525053    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.525060    4640 round_trippers.go:580]     Audit-Id: 1ad5a8dd-64b3-4881-9a8e-e5eaab368c53
	I0805 16:21:36.525066    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.525202    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:36.525483    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.525490    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.525498    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.525502    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.526801    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.526810    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.526814    4640 round_trippers.go:580]     Audit-Id: 71c4017f-a267-489e-86ed-59098eae3b88
	I0805 16:21:36.526817    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.526834    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.526840    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.526846    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.526850    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.527025    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:37.022759    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:37.022781    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.022791    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.022799    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.025487    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.025503    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.025510    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.025515    4640 round_trippers.go:580]     Audit-Id: 7446d9fd-22ed-4d20-b0f2-e8c4a88b04f4
	I0805 16:21:37.025536    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.025543    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.025547    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.025556    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.025649    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:37.026010    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.026020    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.026028    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.026033    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.027337    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.027346    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.027354    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.027359    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.027363    4640 round_trippers.go:580]     Audit-Id: a309eed4-f088-47f7-8b84-4761b59dbb8c
	I0805 16:21:37.027366    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.027368    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.027371    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.027425    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.522283    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:37.522304    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.522315    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.522322    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.524762    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.524776    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.524782    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.524788    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.524792    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.524795    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.524799    4640 round_trippers.go:580]     Audit-Id: eaef42a8-7b43-4091-9b70-8d31adc979e5
	I0805 16:21:37.524803    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.525073    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0805 16:21:37.525438    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.525480    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.525488    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.525492    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.526890    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.526903    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.526912    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.526918    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.526927    4640 round_trippers.go:580]     Audit-Id: a3a0e71a-c982-4504-9fae-e76101688c05
	I0805 16:21:37.526931    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.526935    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.526937    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.527034    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.527211    4640 pod_ready.go:92] pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.527220    4640 pod_ready.go:81] duration metric: took 1.505289062s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
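
Each pod_ready check above applies the same predicate per pod: fetch the Pod, then read its PodReady condition. A sketch of that predicate with a hypothetical helper name (minikube's own implementation lives in pod_ready.go and is not reproduced here):

    package main

    import corev1 "k8s.io/api/core/v1"

    // isPodReady reports whether a Pod's Ready condition is True,
    // i.e. the "Ready":"True" state the log lines above wait for.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }
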
	I0805 16:21:37.527230    4640 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.527259    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-985000
	I0805 16:21:37.527264    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.527269    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.527277    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.528379    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.528389    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.528394    4640 round_trippers.go:580]     Audit-Id: 3cf4f372-47fb-4b72-9b30-185d93d01537
	I0805 16:21:37.528401    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.528405    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.528408    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.528411    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.528414    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.528618    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-985000","namespace":"kube-system","uid":"8d7ca2d9-8c7b-41b9-a199-de6449107471","resourceVersion":"379","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.mirror":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030299Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0805 16:21:37.528833    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.528840    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.528845    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.528850    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.529802    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.529808    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.529813    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.529816    4640 round_trippers.go:580]     Audit-Id: 314df0bd-894e-4607-bad0-3348c18fe807
	I0805 16:21:37.529820    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.529823    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.529826    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.529833    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.530046    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.530203    4640 pod_ready.go:92] pod "etcd-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.530210    4640 pod_ready.go:81] duration metric: took 2.974841ms for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.530218    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.530249    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-985000
	I0805 16:21:37.530253    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.530259    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.530262    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.531449    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.531456    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.531461    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.531463    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.531467    4640 round_trippers.go:580]     Audit-Id: 1801a8f0-22d5-44e8-942c-ea521b1ffa66
	I0805 16:21:37.531469    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.531475    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.531477    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.531592    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-985000","namespace":"kube-system","uid":"9be3378a-5fab-4907-baad-507918e714e4","resourceVersion":"369","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.mirror":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0805 16:21:37.531810    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.531820    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.531825    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.531830    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.532663    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.532668    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.532672    4640 round_trippers.go:580]     Audit-Id: 6d0fc4ed-c609-4ee7-a57f-b61eed1bc442
	I0805 16:21:37.532675    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.532679    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.532682    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.532684    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.532688    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.532807    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.532958    4640 pod_ready.go:92] pod "kube-apiserver-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.532967    4640 pod_ready.go:81] duration metric: took 2.743443ms for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.532973    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.533000    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-985000
	I0805 16:21:37.533004    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.533009    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.533012    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.533987    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.533995    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.534000    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.534004    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.534020    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.534027    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.534031    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.534034    4640 round_trippers.go:580]     Audit-Id: 97e4dc5c-f4bf-419e-8b15-be800418054c
	I0805 16:21:37.534147    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-985000","namespace":"kube-system","uid":"4ad64361-65de-4b0b-b2a3-07df18c2e603","resourceVersion":"342","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.mirror":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.seen":"2024-08-05T23:21:06.366027130Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0805 16:21:37.534370    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.534377    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.534383    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.534386    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.535293    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.535301    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.535305    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.535308    4640 round_trippers.go:580]     Audit-Id: a4c04a0a-9401-41d1-a0fc-f2a2187abde4
	I0805 16:21:37.535310    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.535313    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.535320    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.535323    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.535432    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.535591    4640 pod_ready.go:92] pod "kube-controller-manager-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.535599    4640 pod_ready.go:81] duration metric: took 2.621545ms for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.535606    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.535629    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwgw7
	I0805 16:21:37.535634    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.535639    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.535643    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.536550    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.536557    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.536565    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.536570    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.536575    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.536578    4640 round_trippers.go:580]     Audit-Id: 5a688e80-7db3-4070-a1a8-c3419ddb4d44
	I0805 16:21:37.536580    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.536582    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.536704    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fwgw7","generateName":"kube-proxy-","namespace":"kube-system","uid":"3fb72e39-699d-4123-ae5e-e314a191d904","resourceVersion":"409","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b6258e6-7b31-4600-b32b-4a269867c123","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b6258e6-7b31-4600-b32b-4a269867c123\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0805 16:21:37.614745    4640 request.go:629] Waited for 77.807971ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.614815    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.614822    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.614839    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.614845    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.616956    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.616984    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.616989    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.616993    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.616996    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.616999    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.617002    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.617005    4640 round_trippers.go:580]     Audit-Id: e297627c-4c52-417b-935c-d406bf086c16
	I0805 16:21:37.617232    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.617428    4640 pod_ready.go:92] pod "kube-proxy-fwgw7" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.617437    4640 pod_ready.go:81] duration metric: took 81.82693ms for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
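
The "Waited for ... due to client-side throttling, not priority and fairness" lines (~78ms above, ~198ms below) come from client-go's own token-bucket rate limiter, not from the API server: once the client's burst allowance is spent, each further request queues locally. A sketch of how that limiter is wired into a rest.Config; the 5 QPS / burst 10 values are client-go's long-standing defaults, assumed rather than taken from minikube:

    package main

    import (
        "k8s.io/client-go/rest"
        "k8s.io/client-go/util/flowcontrol"
    )

    // withClientThrottle installs the client-side token bucket that
    // emits the "Waited for ... due to client-side throttling" lines.
    func withClientThrottle(cfg *rest.Config) *rest.Config {
        cfg.QPS = 5    // steady-state requests per second (assumed default)
        cfg.Burst = 10 // extra headroom before requests start to queue
        cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(cfg.QPS, cfg.Burst)
        return cfg
    }
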
	I0805 16:21:37.617444    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.815296    4640 request.go:629] Waited for 197.761592ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:21:37.815347    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:21:37.815355    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.815366    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.815376    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.817961    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.817976    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.818001    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.818008    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:37.818049    4640 round_trippers.go:580]     Audit-Id: cc44c4e8-8012-4718-aa24-c05fec399a2e
	I0805 16:21:37.818064    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.818078    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.818082    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.818186    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-985000","namespace":"kube-system","uid":"5e23b1b7-e45d-4b43-831c-aa835c5e536d","resourceVersion":"396","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.mirror":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.seen":"2024-08-05T23:21:06.366029633Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0805 16:21:38.014472    4640 request.go:629] Waited for 195.947535ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:38.014569    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:38.014578    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.014589    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.014597    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.017395    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.017406    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.017413    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.017418    4640 round_trippers.go:580]     Audit-Id: 925efcbc-f43b-4431-905e-26927bb76a48
	I0805 16:21:38.017422    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.017428    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.017434    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.017441    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.017905    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:38.018153    4640 pod_ready.go:92] pod "kube-scheduler-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:38.018164    4640 pod_ready.go:81] duration metric: took 400.713995ms for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:38.018173    4640 pod_ready.go:38] duration metric: took 2.001673669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
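
The selector set above is walked one label selector at a time: list the matching pods in kube-system, then wait on each returned pod individually. A sketch of the listing half, assuming the same `cs` clientset as before (the selector literal is one of the six from the log):

    package main

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // systemCriticalPods lists kube-system pods for one selector from
    // the set above (k8s-app=kube-dns, component=etcd, and so on).
    func systemCriticalPods(ctx context.Context, cs kubernetes.Interface, selector string) ([]corev1.Pod, error) {
        list, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil {
            return nil, err
        }
        return list.Items, nil
    }
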
	I0805 16:21:38.018198    4640 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:21:38.018268    4640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:21:38.030133    4640 command_runner.go:130] > 1977
	I0805 16:21:38.030360    4640 api_server.go:72] duration metric: took 18.07694495s to wait for apiserver process to appear ...
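
Before probing the API endpoint, minikube confirms a kube-apiserver process actually exists inside the VM by running `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH; the lone `1977` above is the matched PID. Roughly equivalent local shell-out logic (a sketch, not minikube's ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // apiserverPID mimics the process check: -x matches exactly,
    // -n keeps only the newest match, -f matches the full command line.
    func apiserverPID() (string, error) {
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return "", fmt.Errorf("kube-apiserver not running yet: %w", err)
        }
        return strings.TrimSpace(string(out)), nil
    }
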
	I0805 16:21:38.030369    4640 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:21:38.030384    4640 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:21:38.034009    4640 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
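
The healthz probe is a raw GET against the apiserver that counts as healthy only on a 200 with body "ok", exactly as logged above. One way to issue the same request through client-go's discovery REST client (a sketch; minikube builds its own HTTP client for this check, so the route shown is an assumption):

    package main

    import (
        "context"

        "k8s.io/client-go/kubernetes"
    )

    // healthz issues GET /healthz and reports whether the apiserver
    // answered 200 "ok", i.e. all registered health checks passed.
    func healthz(ctx context.Context, cs kubernetes.Interface) bool {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
        return err == nil && string(body) == "ok"
    }
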
	I0805 16:21:38.034048    4640 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0805 16:21:38.034052    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.034058    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.034063    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.034646    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:38.034653    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.034658    4640 round_trippers.go:580]     Audit-Id: 9f5c9766-330c-4bb5-a5de-4c3a0fdbe474
	I0805 16:21:38.034662    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.034665    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.034668    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.034670    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.034673    4640 round_trippers.go:580]     Content-Length: 263
	I0805 16:21:38.034676    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.034687    4640 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0805 16:21:38.034733    4640 api_server.go:141] control plane version: v1.30.3
	I0805 16:21:38.034742    4640 api_server.go:131] duration metric: took 4.369143ms to wait for apiserver health ...
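
	[editor's note] The version probe decodes the JSON response printed above into the "control plane version" line. A sketch of just the decode step — the struct fields mirror the logged body; the literal input is copied from it:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// versionInfo mirrors the fields of the /version response body logged above.
	type versionInfo struct {
		Major      string `json:"major"`
		Minor      string `json:"minor"`
		GitVersion string `json:"gitVersion"`
		Platform   string `json:"platform"`
	}

	func main() {
		raw := `{"major":"1","minor":"30","gitVersion":"v1.30.3","platform":"linux/amd64"}`
		var v versionInfo
		if err := json.Unmarshal([]byte(raw), &v); err != nil {
			panic(err)
		}
		// minikube records gitVersion as the control plane version (v1.30.3 here).
		fmt.Println("control plane version:", v.GitVersion)
	}
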
	I0805 16:21:38.034747    4640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 16:21:38.213812    4640 request.go:629] Waited for 178.999213ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.213950    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.213960    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.213970    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.213980    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.217309    4640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:21:38.217324    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.217331    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.217336    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.217363    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.217372    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.217377    4640 round_trippers.go:580]     Audit-Id: 0f21513f-44e7-4d2f-bacd-2a12fceef757
	I0805 16:21:38.217381    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.217979    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0805 16:21:38.219249    4640 system_pods.go:59] 8 kube-system pods found
	I0805 16:21:38.219261    4640 system_pods.go:61] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:21:38.219265    4640 system_pods.go:61] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:21:38.219268    4640 system_pods.go:61] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:21:38.219271    4640 system_pods.go:61] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:21:38.219276    4640 system_pods.go:61] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:21:38.219278    4640 system_pods.go:61] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:21:38.219280    4640 system_pods.go:61] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:21:38.219283    4640 system_pods.go:61] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:21:38.219286    4640 system_pods.go:74] duration metric: took 184.535842ms to wait for pod list to return data ...
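
	[editor's note] The system-pods wait is a straightforward List against the kube-system namespace. A minimal client-go sketch of the same check — the kubeconfig path is a placeholder, and error handling is trimmed for brevity:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; minikube writes its own under MINIKUBE_HOME.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			// The log above prints name, UID and phase in the same spirit.
			fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		}
	}
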
	I0805 16:21:38.219291    4640 default_sa.go:34] waiting for default service account to be created ...
	I0805 16:21:38.413643    4640 request.go:629] Waited for 194.308242ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:21:38.413680    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:21:38.413687    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.413695    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.413699    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.415522    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:38.415531    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.415536    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.415539    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.415543    4640 round_trippers.go:580]     Content-Length: 261
	I0805 16:21:38.415546    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.415548    4640 round_trippers.go:580]     Audit-Id: efc85c0c-9bbc-4cb7-8c14-19ba2f873800
	I0805 16:21:38.415551    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.415553    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.415563    4640 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b0626468-f73b-4e9b-8270-658495d43f4a","resourceVersion":"337","creationTimestamp":"2024-08-05T23:21:19Z"}}]}
	I0805 16:21:38.415681    4640 default_sa.go:45] found service account: "default"
	I0805 16:21:38.415690    4640 default_sa.go:55] duration metric: took 196.394719ms for default service account to be created ...
	I0805 16:21:38.415697    4640 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 16:21:38.613742    4640 request.go:629] Waited for 198.012461ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.613858    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.613864    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.613870    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.613874    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.616077    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.616090    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.616099    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.616106    4640 round_trippers.go:580]     Audit-Id: 3f8a6f23-788b-41c4-8dee-6ff59c02c21d
	I0805 16:21:38.616112    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.616116    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.616126    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.616143    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.616489    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0805 16:21:38.617747    4640 system_pods.go:86] 8 kube-system pods found
	I0805 16:21:38.617761    4640 system_pods.go:89] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:21:38.617766    4640 system_pods.go:89] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:21:38.617770    4640 system_pods.go:89] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:21:38.617773    4640 system_pods.go:89] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:21:38.617776    4640 system_pods.go:89] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:21:38.617780    4640 system_pods.go:89] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:21:38.617784    4640 system_pods.go:89] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:21:38.617787    4640 system_pods.go:89] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:21:38.617792    4640 system_pods.go:126] duration metric: took 202.090644ms to wait for k8s-apps to be running ...
	I0805 16:21:38.617801    4640 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 16:21:38.617848    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:21:38.629448    4640 system_svc.go:56] duration metric: took 11.643357ms WaitForService to wait for kubelet
	I0805 16:21:38.629463    4640 kubeadm.go:582] duration metric: took 18.676048708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
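
	[editor's note] The kubelet gate shells out to systemd: `systemctl is-active --quiet <unit>` prints nothing and reports state purely via its exit code. An equivalent local sketch (this would run inside the guest, where systemctl is available):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Exit code 0 only when the unit is active; any other state
		// (inactive, failed, unknown) yields a non-zero exit.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		if err != nil {
			fmt.Println("kubelet is not active:", err)
			return
		}
		fmt.Println("kubelet is active")
	}
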
	I0805 16:21:38.629475    4640 node_conditions.go:102] verifying NodePressure condition ...
	I0805 16:21:38.814057    4640 request.go:629] Waited for 184.539621ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0805 16:21:38.814182    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0805 16:21:38.814193    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.814205    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.814213    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.817076    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.817092    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.817099    4640 round_trippers.go:580]     Audit-Id: 83bb2c88-8ae3-45b7-a0f6-9d3f9fead5f2
	I0805 16:21:38.817103    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.817112    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.817116    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.817123    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.817128    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:39 GMT
	I0805 16:21:38.817200    4640 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I0805 16:21:38.817474    4640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:21:38.817490    4640 node_conditions.go:123] node cpu capacity is 2
	I0805 16:21:38.817502    4640 node_conditions.go:105] duration metric: took 188.023135ms to run NodePressure ...
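
	[editor's note] The NodePressure step reads the node's capacity fields; the values logged above are Kubernetes resource-quantity strings. A small sketch of how such quantities parse (values taken from the log; k8s.io/apimachinery assumed on the module path):

	package main

	import (
		"fmt"

		"k8s.io/apimachinery/pkg/api/resource"
	)

	func main() {
		// Capacity values as logged for multinode-985000.
		storage := resource.MustParse("17734596Ki")
		cpu := resource.MustParse("2")
		// Value() returns the canonical integer amount (bytes for storage).
		fmt.Println("ephemeral storage (bytes):", storage.Value())
		fmt.Println("cpu capacity:", cpu.Value())
	}
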
	I0805 16:21:38.817512    4640 start.go:241] waiting for startup goroutines ...
	I0805 16:21:38.817520    4640 start.go:246] waiting for cluster config update ...
	I0805 16:21:38.817530    4640 start.go:255] writing updated cluster config ...
	I0805 16:21:38.838343    4640 out.go:177] 
	I0805 16:21:38.859405    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:38.859465    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:38.881260    4640 out.go:177] * Starting "multinode-985000-m02" worker node in "multinode-985000" cluster
	I0805 16:21:38.923226    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:21:38.923254    4640 cache.go:56] Caching tarball of preloaded images
	I0805 16:21:38.923425    4640 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:21:38.923439    4640 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:21:38.923503    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:38.924257    4640 start.go:360] acquireMachinesLock for multinode-985000-m02: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:21:38.924355    4640 start.go:364] duration metric: took 78.775µs to acquireMachinesLock for "multinode-985000-m02"
	I0805 16:21:38.924379    4640 start.go:93] Provisioning new machine with config: &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0805 16:21:38.924443    4640 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0805 16:21:38.946258    4640 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:21:38.946431    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:38.946482    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:38.956315    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52515
	I0805 16:21:38.956651    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:38.957008    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:38.957028    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:38.957245    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:38.957408    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:38.957527    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:38.957642    4640 start.go:159] libmachine.API.Create for "multinode-985000" (driver="hyperkit")
	I0805 16:21:38.957663    4640 client.go:168] LocalClient.Create starting
	I0805 16:21:38.957697    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:21:38.957735    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:21:38.957747    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:21:38.957790    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:21:38.957819    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:21:38.957833    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:21:38.957849    4640 main.go:141] libmachine: Running pre-create checks...
	I0805 16:21:38.957855    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .PreCreateCheck
	I0805 16:21:38.957933    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:38.957959    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:38.967700    4640 main.go:141] libmachine: Creating machine...
	I0805 16:21:38.967725    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .Create
	I0805 16:21:38.967957    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:38.968233    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:38.967940    4677 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:21:38.968338    4640 main.go:141] libmachine: (multinode-985000-m02) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:21:39.171726    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.171650    4677 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa...
	I0805 16:21:39.251408    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.251327    4677 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk...
	I0805 16:21:39.251421    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Writing magic tar header
	I0805 16:21:39.251439    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Writing SSH key tar header
	I0805 16:21:39.252021    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.251983    4677 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02 ...
	I0805 16:21:39.622286    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:39.622309    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid
	I0805 16:21:39.622382    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Using UUID ab5b9c9f-9e28-4bc2-8fcd-b98fce011173
	I0805 16:21:39.647304    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Generated MAC a6:1c:88:9c:44:3
	I0805 16:21:39.647324    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:21:39.647363    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0805 16:21:39.647396    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0805 16:21:39.647440    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/j
enkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:21:39.647475    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ab5b9c9f-9e28-4bc2-8fcd-b98fce011173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/mult
inode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
	I0805 16:21:39.647493    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:21:39.650407    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Pid is 4678
	I0805 16:21:39.650823    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 0
	I0805 16:21:39.650838    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:39.650909    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:39.651807    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:39.651870    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:39.651899    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:39.651984    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:39.652006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:39.652022    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:39.652032    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:39.652039    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:39.652046    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:39.652082    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:39.652100    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:39.652113    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:39.652123    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:39.652143    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
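
	[editor's note] The driver polls /var/db/dhcpd_leases every couple of seconds until the VM's generated MAC appears, as the "Attempt N" blocks show. A rough sketch of that scan — the lease-file field names used here (ip_address=, hw_address=) and their per-entry ordering are assumptions about macOS's format; the real driver has its own parser:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findLeaseIP scans a dhcpd_leases-style file for an entry whose
	// hw_address contains the target MAC and returns that entry's ip_address.
	func findLeaseIP(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
				// Assumes ip_address precedes hw_address within an entry.
				return ip, nil
			}
		}
		return "", fmt.Errorf("no lease for %s", mac)
	}

	func main() {
		ip, err := findLeaseIP("/var/db/dhcpd_leases", "a6:1c:88:9c:44:3")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("IP:", ip) // the run above eventually matches 192.169.0.14
	}
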
	I0805 16:21:39.657903    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:21:39.666018    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:21:39.666937    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:21:39.666963    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:21:39.666975    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:21:39.666990    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:21:40.050205    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:21:40.050221    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:21:40.165006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:21:40.165028    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:21:40.165042    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:21:40.165049    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:21:40.165899    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:21:40.165911    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:21:41.653048    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 1
	I0805 16:21:41.653066    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:41.653144    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:41.653911    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:41.653968    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:41.653979    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:41.653992    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:41.653998    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:41.654006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:41.654015    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:41.654030    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:41.654045    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:41.654053    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:41.654061    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:41.654070    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:41.654078    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:41.654093    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:43.655366    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 2
	I0805 16:21:43.655382    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:43.655471    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:43.656243    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:43.656291    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:43.656301    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:43.656319    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:43.656329    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:43.656351    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:43.656362    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:43.656369    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:43.656375    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:43.656391    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:43.656406    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:43.656416    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:43.656423    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:43.656437    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:45.657345    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 3
	I0805 16:21:45.657361    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:45.657459    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:45.658214    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:45.658269    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:45.658278    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:45.658286    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:45.658295    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:45.658310    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:45.658321    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:45.658329    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:45.658337    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:45.658349    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:45.658362    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:45.658370    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:45.658378    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:45.658387    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:45.751756    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:21:45.751812    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:21:45.751830    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:21:45.774801    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:21:47.659182    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 4
	I0805 16:21:47.659208    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:47.659291    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:47.660062    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:47.660112    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:47.660128    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:47.660137    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:47.660145    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:47.660153    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:47.660162    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:47.660178    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:47.660192    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:47.660204    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:47.660218    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:47.660230    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:47.660240    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:47.660260    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:49.662115    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 5
	I0805 16:21:49.662148    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:49.662310    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:49.663748    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:49.663812    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0805 16:21:49.663831    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b00c}
	I0805 16:21:49.663846    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found match: a6:1c:88:9c:44:3
	I0805 16:21:49.663856    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | IP: 192.169.0.14
	I0805 16:21:49.663945    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:49.664855    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:49.665006    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:49.665127    4640 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 16:21:49.665139    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:21:49.665271    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:49.665344    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:49.666326    4640 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 16:21:49.666337    4640 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 16:21:49.666342    4640 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 16:21:49.666348    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.666471    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.666603    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.666743    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.666869    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.667045    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.667279    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.667287    4640 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 16:21:49.724369    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
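
	[editor's note] WaitForSSH simply runs `exit 0` over the new connection until it succeeds, which is the probe recorded just above. A compact sketch with golang.org/x/crypto/ssh — host-key checking is disabled for brevity, and the key path is a placeholder standing in for the machine id_rsa logged earlier:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyBytes, err := os.ReadFile("/path/to/machines/multinode-985000-m02/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "192.169.0.14:22", cfg)
		if err != nil {
			fmt.Println("SSH not ready yet:", err)
			return
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		// The probe is literally `exit 0`; a nil error means SSH is usable.
		fmt.Println("SSH cmd err:", sess.Run("exit 0"))
	}
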
	I0805 16:21:49.724382    4640 main.go:141] libmachine: Detecting the provisioner...
	I0805 16:21:49.724388    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.724522    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.724626    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.724719    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.724810    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.724938    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.725087    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.725094    4640 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 16:21:49.782403    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 16:21:49.782454    4640 main.go:141] libmachine: found compatible host: buildroot
	I0805 16:21:49.782460    4640 main.go:141] libmachine: Provisioning with buildroot...
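
	[editor's note] Provisioner detection boils down to reading the ID= field from the /etc/os-release output printed above. A tiny sketch of that mapping, using the exact output captured in the log (quoting rules and other fields ignored):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Verbatim output of `cat /etc/os-release` from the log above.
		osRelease := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
		for _, line := range strings.Split(osRelease, "\n") {
			if id, ok := strings.CutPrefix(line, "ID="); ok {
				// minikube treats ID=buildroot as a compatible host.
				fmt.Println("found compatible host:", id)
			}
		}
	}
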
	I0805 16:21:49.782466    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.782595    4640 buildroot.go:166] provisioning hostname "multinode-985000-m02"
	I0805 16:21:49.782606    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.782698    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.782797    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.782871    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.782964    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.783079    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.783204    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.783350    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.783359    4640 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000-m02 && echo "multinode-985000-m02" | sudo tee /etc/hostname
	I0805 16:21:49.854175    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000-m02
	
	I0805 16:21:49.854190    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.854319    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.854421    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.854492    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.854587    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.854712    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.854870    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.854882    4640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:21:49.917814    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:21:49.917830    4640 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:21:49.917840    4640 buildroot.go:174] setting up certificates
	I0805 16:21:49.917846    4640 provision.go:84] configureAuth start
	I0805 16:21:49.917856    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.917985    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:49.918095    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.918192    4640 provision.go:143] copyHostCerts
	I0805 16:21:49.918223    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:21:49.918280    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:21:49.918285    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:21:49.918411    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:21:49.918617    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:21:49.918652    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:21:49.918658    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:21:49.918733    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:21:49.918888    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:21:49.918922    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:21:49.918927    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:21:49.918994    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:21:49.919145    4640 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-985000-m02]
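	(Editor's note: the provision step above issues a server certificate from minikube's local CA with SANs covering the node IP and its hostnames. As a hedged sketch only — file names taken from the log, not from minikube's actual code path — the equivalent openssl invocation would be:
	
	  # sketch: issue a server cert from the local CA with the SANs shown above
	  openssl req -new -key server-key.pem -subj "/O=jenkins.multinode-985000-m02" -out server.csr
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.169.0.14,DNS:localhost,DNS:minikube,DNS:multinode-985000-m02') \
	    -out server.pem
	)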
	I0805 16:21:50.072896    4640 provision.go:177] copyRemoteCerts
	I0805 16:21:50.072947    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:21:50.072962    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.073107    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.073199    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.073317    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.073426    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:50.108446    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:21:50.108519    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:21:50.128617    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:21:50.128684    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0805 16:21:50.148653    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:21:50.148720    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:21:50.168682    4640 provision.go:87] duration metric: took 250.828344ms to configureAuth
	I0805 16:21:50.168695    4640 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:21:50.168835    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:50.168849    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:50.168993    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.169087    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.169175    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.169262    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.169345    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.169486    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.169621    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.169628    4640 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:21:50.228062    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:21:50.228074    4640 buildroot.go:70] root file system type: tmpfs
	I0805 16:21:50.228150    4640 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:21:50.228164    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.228293    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.228388    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.228480    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.228586    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.228755    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.228888    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.228934    4640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:21:50.296901    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:21:50.296919    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.297064    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.297158    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.297250    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.297333    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.297475    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.297611    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.297624    4640 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:21:51.873922    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
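	(Editor's note: two patterns in the step above are worth calling out. First, the empty ExecStart= line in the unit clears any inherited start command; systemd rejects multiple ExecStart= values for non-oneshot services, exactly as the comment in the unit warns. Second, the diff -u ... || { mv ...; restart; } one-liner only installs and restarts when the rendered unit differs — here diff failed because no unit existed yet, so the unit was installed and enabled. A minimal sketch of the same override pattern on a stock install, paths assumed:
	
	  sudo mkdir -p /etc/systemd/system/docker.service.d
	  printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock\n' \
	    | sudo tee /etc/systemd/system/docker.service.d/10-override.conf
	  sudo systemctl daemon-reload && sudo systemctl restart docker
	)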
	
	I0805 16:21:51.873940    4640 main.go:141] libmachine: Checking connection to Docker...
	I0805 16:21:51.873964    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetURL
	I0805 16:21:51.874107    4640 main.go:141] libmachine: Docker is up and running!
	I0805 16:21:51.874115    4640 main.go:141] libmachine: Reticulating splines...
	I0805 16:21:51.874120    4640 client.go:171] duration metric: took 12.916447572s to LocalClient.Create
	I0805 16:21:51.874129    4640 start.go:167] duration metric: took 12.916485141s to libmachine.API.Create "multinode-985000"
	I0805 16:21:51.874135    4640 start.go:293] postStartSetup for "multinode-985000-m02" (driver="hyperkit")
	I0805 16:21:51.874142    4640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:21:51.874152    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:51.874292    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:21:51.874313    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:51.874416    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:51.874505    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.874583    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:51.874657    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:51.915394    4640 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:21:51.919538    4640 command_runner.go:130] > NAME=Buildroot
	I0805 16:21:51.919549    4640 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:21:51.919553    4640 command_runner.go:130] > ID=buildroot
	I0805 16:21:51.919557    4640 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:21:51.919560    4640 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:21:51.919635    4640 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:21:51.919645    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:21:51.919746    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:21:51.919897    4640 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:21:51.919903    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:21:51.920070    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:21:51.929531    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:21:51.959146    4640 start.go:296] duration metric: took 85.003807ms for postStartSetup
	I0805 16:21:51.959174    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:51.959830    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:51.959996    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:51.960355    4640 start.go:128] duration metric: took 13.03589336s to createHost
	I0805 16:21:51.960370    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:51.960461    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:51.960532    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.960607    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.960679    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:51.960792    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:51.960921    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:51.960928    4640 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:21:52.018527    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722900112.019707412
	
	I0805 16:21:52.018539    4640 fix.go:216] guest clock: 1722900112.019707412
	I0805 16:21:52.018544    4640 fix.go:229] Guest: 2024-08-05 16:21:52.019707412 -0700 PDT Remote: 2024-08-05 16:21:51.960363 -0700 PDT m=+79.692294773 (delta=59.344412ms)
	I0805 16:21:52.018555    4640 fix.go:200] guest clock delta is within tolerance: 59.344412ms
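	(Editor's note: the fix.go lines above compare the guest clock against the host and skip a resync when the delta — 59.3ms here — is within tolerance. A hedged by-hand version of the same check, assuming GNU date with %N on both ends:
	
	  guest=$(ssh -i /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa docker@192.169.0.14 'date +%s.%N')
	  host=$(date +%s.%N)
	  awk -v g="$guest" -v h="$host" 'BEGIN { printf "guest-host delta: %.3fs\n", g - h }'
	)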
	I0805 16:21:52.018561    4640 start.go:83] releasing machines lock for "multinode-985000-m02", held for 13.094193048s
	I0805 16:21:52.018577    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.018703    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:52.040117    4640 out.go:177] * Found network options:
	I0805 16:21:52.084887    4640 out.go:177]   - NO_PROXY=192.169.0.13
	W0805 16:21:52.106885    4640 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:21:52.106945    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.107811    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.108153    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.108320    4640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:21:52.108371    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	W0805 16:21:52.108412    4640 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:21:52.108519    4640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:21:52.108545    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:52.108628    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:52.108772    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:52.108842    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:52.108951    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:52.109026    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:52.109176    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:52.109197    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:52.109323    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:52.141829    4640 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:21:52.141939    4640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:21:52.141993    4640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:21:52.191903    4640 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:21:52.192466    4640 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:21:52.192507    4640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:21:52.192514    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:21:52.192581    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:21:52.208225    4640 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:21:52.208528    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:21:52.217078    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:21:52.225489    4640 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:21:52.225534    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:21:52.233992    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:21:52.242465    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:21:52.250835    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:21:52.260065    4640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:21:52.268863    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:21:52.277242    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:21:52.285501    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
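	(Editor's note: the sed series above pins containerd to the cgroupfs driver (SystemdCgroup = false), the runc v2 shim, and /etc/cni/net.d as the CNI conf_dir. A quick hedged sanity check of the rendered file before the restart below:
	
	  sudo grep -nE 'SystemdCgroup|conf_dir|runc\.v2' /etc/containerd/config.toml
	)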
	I0805 16:21:52.293845    4640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:21:52.301185    4640 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:21:52.301319    4640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:21:52.308881    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:52.403323    4640 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:21:52.423722    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:21:52.423794    4640 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:21:52.442557    4640 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:21:52.443108    4640 command_runner.go:130] > [Unit]
	I0805 16:21:52.443119    4640 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:21:52.443124    4640 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:21:52.443128    4640 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:21:52.443132    4640 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:21:52.443136    4640 command_runner.go:130] > StartLimitBurst=3
	I0805 16:21:52.443141    4640 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:21:52.443147    4640 command_runner.go:130] > [Service]
	I0805 16:21:52.443151    4640 command_runner.go:130] > Type=notify
	I0805 16:21:52.443155    4640 command_runner.go:130] > Restart=on-failure
	I0805 16:21:52.443160    4640 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0805 16:21:52.443165    4640 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:21:52.443175    4640 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:21:52.443182    4640 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:21:52.443188    4640 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:21:52.443194    4640 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:21:52.443200    4640 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:21:52.443212    4640 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:21:52.443224    4640 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:21:52.443231    4640 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:21:52.443234    4640 command_runner.go:130] > ExecStart=
	I0805 16:21:52.443246    4640 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:21:52.443250    4640 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:21:52.443256    4640 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:21:52.443262    4640 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:21:52.443265    4640 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:21:52.443269    4640 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:21:52.443272    4640 command_runner.go:130] > LimitCORE=infinity
	I0805 16:21:52.443277    4640 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:21:52.443282    4640 command_runner.go:130] > # Only systemd 226 and above support this option.
	I0805 16:21:52.443285    4640 command_runner.go:130] > TasksMax=infinity
	I0805 16:21:52.443290    4640 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:21:52.443296    4640 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:21:52.443299    4640 command_runner.go:130] > Delegate=yes
	I0805 16:21:52.443304    4640 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:21:52.443313    4640 command_runner.go:130] > KillMode=process
	I0805 16:21:52.443317    4640 command_runner.go:130] > [Install]
	I0805 16:21:52.443321    4640 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:21:52.443454    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:21:52.455112    4640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:21:52.472976    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:21:52.485648    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:21:52.496640    4640 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:21:52.520742    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:21:52.532843    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:21:52.547391    4640 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:21:52.547619    4640 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:21:52.550475    4640 command_runner.go:130] > /usr/bin/cri-dockerd
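	(Editor's note: with /etc/crictl.yaml now pointing at cri-dockerd, the CRI endpoint can be exercised directly. A hedged smoke test:
	
	  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info
	  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a
	)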
	I0805 16:21:52.550551    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:21:52.558821    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:21:52.572801    4640 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:21:52.669948    4640 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:21:52.772017    4640 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:21:52.772038    4640 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
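	(Editor's note: the 130-byte daemon.json is not echoed in the log. A minimal hedged sketch of a file that pins the same cgroup driver — the real file likely carries more keys:
	
	  printf '{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }\n' | sudo tee /etc/docker/daemon.json
	)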
	I0805 16:21:52.785587    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:52.887001    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:22:53.782764    4640 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0805 16:22:53.782779    4640 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0805 16:22:53.782788    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.895755367s)
	I0805 16:22:53.782849    4640 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0805 16:22:53.791796    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:22:53.791808    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578059613Z" level=info msg="Starting up"
	I0805 16:22:53.791820    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578746899Z" level=info msg="containerd not running, starting managed containerd"
	I0805 16:22:53.791833    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.579364099Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	I0805 16:22:53.791843    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.597194743Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0805 16:22:53.791853    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613422882Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0805 16:22:53.791865    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613448264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0805 16:22:53.791875    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613527396Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0805 16:22:53.791884    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613540484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791897    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613598776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791906    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613664323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791924    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613844698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791936    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613881896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791948    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613894727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791957    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613902000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791967    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614005875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791976    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614259691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791991    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615867073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.792000    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615974584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.792024    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616138996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.792033    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616172823Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0805 16:22:53.792042    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616291383Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0805 16:22:53.792050    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616398312Z" level=info msg="metadata content store policy set" policy=shared
	I0805 16:22:53.792059    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.618998610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0805 16:22:53.792068    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619065338Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0805 16:22:53.792076    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619081703Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0805 16:22:53.792085    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619092273Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0805 16:22:53.792094    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619101426Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0805 16:22:53.792103    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619164798Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0805 16:22:53.792113    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619370752Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0805 16:22:53.792121    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619460644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0805 16:22:53.792129    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619495461Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0805 16:22:53.792138    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619506581Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0805 16:22:53.792148    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619515758Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792158    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619524383Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792170    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619532546Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792178    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619541391Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792187    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619550990Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792197    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619565508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792266    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619576616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792278    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619584035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792291    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619598072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792299    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619608190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792307    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619616319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792316    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619625389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792326    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619634123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792335    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619648148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792344    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619658942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792353    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619667668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792362    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619676302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792371    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619686416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792380    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619694011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792388    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619701566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792397    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619709342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792406    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619719250Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0805 16:22:53.792415    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619733203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792423    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619741785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792432    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619749153Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0805 16:22:53.792442    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619797467Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0805 16:22:53.792454    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619811479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0805 16:22:53.792467    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619819137Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0805 16:22:53.792661    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619826861Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0805 16:22:53.792673    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619833500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792682    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619841896Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0805 16:22:53.792690    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619852419Z" level=info msg="NRI interface is disabled by configuration."
	I0805 16:22:53.792702    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620071162Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0805 16:22:53.792710    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620124755Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0805 16:22:53.792718    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620155079Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0805 16:22:53.792725    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620168148Z" level=info msg="containerd successfully booted in 0.023750s"
	I0805 16:22:53.792734    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.639692405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0805 16:22:53.792741    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.644102102Z" level=info msg="Loading containers: start."
	I0805 16:22:53.792763    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.740540264Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0805 16:22:53.792774    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.826229634Z" level=info msg="Loading containers: done."
	I0805 16:22:53.792783    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843276878Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0805 16:22:53.792792    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843375843Z" level=info msg="Daemon has completed initialization"
	I0805 16:22:53.792800    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869275976Z" level=info msg="API listen on /var/run/docker.sock"
	I0805 16:22:53.792807    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869434474Z" level=info msg="API listen on [::]:2376"
	I0805 16:22:53.792813    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	I0805 16:22:53.792821    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.919662359Z" level=info msg="Processing signal 'terminated'"
	I0805 16:22:53.792829    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920773928Z" level=info msg="Daemon shutdown complete"
	I0805 16:22:53.792840    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920792538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0805 16:22:53.792852    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920845272Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0805 16:22:53.792861    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920858866Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0805 16:22:53.792868    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0805 16:22:53.792874    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0805 16:22:53.792904    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0805 16:22:53.792911    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:22:53.792918    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 dockerd[923]: time="2024-08-05T23:21:53.957339969Z" level=info msg="Starting up"
	I0805 16:22:53.792929    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 dockerd[923]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0805 16:22:53.792940    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0805 16:22:53.792946    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0805 16:22:53.792952    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
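	(Editor's note: per the journal above, the first dockerd (pid 514) came up with its managed containerd, but after the config rewrite the second start (pid 923) spent the full dial deadline failing to reach /run/containerd/containerd.sock, so systemd marked docker.service failed — hence the 1m0.9s restart and exit status 1. Hypothetical next steps on the guest, assuming the standalone containerd unit and socket path:
	
	  systemctl status containerd --no-pager
	  ls -l /run/containerd/containerd.sock
	  sudo journalctl --no-pager -u containerd | tail -n 50
	)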
	I0805 16:22:53.817223    4640 out.go:177] 
	W0805 16:22:53.838182    4640 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 05 23:21:50 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578059613Z" level=info msg="Starting up"
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578746899Z" level=info msg="containerd not running, starting managed containerd"
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.579364099Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.597194743Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613422882Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613448264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613527396Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613540484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613598776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613664323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613844698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613881896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613894727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613902000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614005875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614259691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615867073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615974584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616138996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616172823Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616291383Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616398312Z" level=info msg="metadata content store policy set" policy=shared
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.618998610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619065338Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619081703Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619092273Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619101426Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619164798Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619370752Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619460644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619495461Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619506581Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619515758Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619524383Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619532546Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619541391Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619550990Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619565508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619576616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619584035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619598072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619608190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619616319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619625389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619634123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619648148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619658942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619667668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619676302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619686416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619694011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619701566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619709342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619719250Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619733203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619741785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619749153Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619797467Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619811479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619819137Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619826861Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619833500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619841896Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619852419Z" level=info msg="NRI interface is disabled by configuration."
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620071162Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620124755Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620155079Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620168148Z" level=info msg="containerd successfully booted in 0.023750s"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.639692405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.644102102Z" level=info msg="Loading containers: start."
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.740540264Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.826229634Z" level=info msg="Loading containers: done."
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843276878Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843375843Z" level=info msg="Daemon has completed initialization"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869275976Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869434474Z" level=info msg="API listen on [::]:2376"
	Aug 05 23:21:51 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.919662359Z" level=info msg="Processing signal 'terminated'"
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920773928Z" level=info msg="Daemon shutdown complete"
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920792538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920845272Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920858866Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 05 23:21:52 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:21:53 multinode-985000-m02 dockerd[923]: time="2024-08-05T23:21:53.957339969Z" level=info msg="Starting up"
	Aug 05 23:22:53 multinode-985000-m02 dockerd[923]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0805 16:22:53.838301    4640 out.go:239] * 
	W0805 16:22:53.839537    4640 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:22:53.901092    4640 out.go:177] 
	
	
	==> Docker <==
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.538240622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.545949341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546006859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546094356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546213245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:21:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2a8cd74365e92f179bb6ee1ce28c9364c192d2bf64c54e8b18c5339cfbdf5dcd/resolv.conf as [nameserver 192.169.0.1]"
	Aug 05 23:21:36 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:21:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/35b9ac42edc06af57c697463456d60a00f8d9d12849ef967af1e639bc238e3b3/resolv.conf as [nameserver 192.169.0.1]"
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.715025205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.715620680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.716022138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.717088853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755323726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755409641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755418837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.764703174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.493861515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.493963422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.494329548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.494770138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:57 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:22:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/abfb33d4f204dd0b2a7ffc533336cce5539144674b64125ac7373b0be8961559/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 05 23:22:58 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:22:58Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841390849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841491056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841532145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841640743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0cbc162071e51       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   abfb33d4f204d       busybox-fc5497c4f-44k5g
	c9365aec33892       cbb01a7bd410d                                                                                         13 minutes ago      Running             coredns                   0                   35b9ac42edc06       coredns-7db6d8ff4d-fqtll
	3d9fd612d0b14       6e38f40d628db                                                                                         13 minutes ago      Running             storage-provisioner       0                   2a8cd74365e92       storage-provisioner
	724e5cfab0a27       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              13 minutes ago      Running             kindnet-cni               0                   65a1122097f07       kindnet-tvtvg
	d58ca48f9f8b2       55bb025d2cfa5                                                                                         13 minutes ago      Running             kube-proxy                0                   c91338eb0e138       kube-proxy-fwgw7
	792feba1a6f6b       3edc18e7b7672                                                                                         14 minutes ago      Running             kube-scheduler            0                   c86e04eb7823b       kube-scheduler-multinode-985000
	1fdd85b796ab3       3861cfcd7c04c                                                                                         14 minutes ago      Running             etcd                      0                   b58900db52990       etcd-multinode-985000
	d11865076c645       76932a3b37d7e                                                                                         14 minutes ago      Running             kube-controller-manager   0                   55a20063845e3       kube-controller-manager-multinode-985000
	608878b33f358       1f6d574d502f3                                                                                         14 minutes ago      Running             kube-apiserver            0                   569788c2699f1       kube-apiserver-multinode-985000
	
	
	==> coredns [c9365aec3389] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57821 - 19682 "HINFO IN 7732396596932693360.4385804994640298901. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014623104s
	[INFO] 10.244.0.3:44234 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136193s
	[INFO] 10.244.0.3:37423 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.058799401s
	[INFO] 10.244.0.3:57961 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.010090318s
	[INFO] 10.244.0.3:37799 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.012765436s
	[INFO] 10.244.0.3:46499 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078364s
	[INFO] 10.244.0.3:42436 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011216992s
	[INFO] 10.244.0.3:35880 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000144767s
	[INFO] 10.244.0.3:39224 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104006s
	[INFO] 10.244.0.3:48536 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013324615s
	[INFO] 10.244.0.3:55841 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000221823s
	[INFO] 10.244.0.3:46712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111417s
	[INFO] 10.244.0.3:51982 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099744s
	[INFO] 10.244.0.3:55425 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080184s
	[INFO] 10.244.0.3:58084 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119904s
	[INFO] 10.244.0.3:57892 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049065s
	[INFO] 10.244.0.3:52329 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049128s
	[INFO] 10.244.0.3:60384 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083319s
	[INFO] 10.244.0.3:51923 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000058598s
	[INFO] 10.244.0.3:37985 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007256s
	[INFO] 10.244.0.3:45792 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000071025s
	
	
	==> describe nodes <==
	Name:               multinode-985000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-985000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=multinode-985000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T16_21_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:21:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-985000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:35:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-985000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 43d0d80c8ac846e58ac4351481e2a76f
	  System UUID:                3ac6443b-0000-0000-898d-9b152fa64288
	  Boot ID:                    382df761-aca3-4a9d-bdce-655bf0444398
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-44k5g                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-fqtll                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-985000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-tvtvg                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-985000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-985000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-fwgw7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-985000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node multinode-985000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node multinode-985000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node multinode-985000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node multinode-985000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node multinode-985000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node multinode-985000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node multinode-985000 event: Registered Node multinode-985000 in Controller
	  Normal  NodeReady                13m                kubelet          Node multinode-985000 status is now: NodeReady
	
	
	Name:               multinode-985000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-985000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=multinode-985000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T16_34_49_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:34:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-985000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:35:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:35:11 +0000   Mon, 05 Aug 2024 23:34:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:35:11 +0000   Mon, 05 Aug 2024 23:34:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:35:11 +0000   Mon, 05 Aug 2024 23:34:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:35:11 +0000   Mon, 05 Aug 2024 23:35:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.15
	  Hostname:    multinode-985000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 826016b56497466499a1ccf530c0b20a
	  System UUID:                f79c425f-0000-0000-b959-1b18fd31916b
	  Boot ID:                    e2b098c4-c586-45f3-bd88-3d2d31770824
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ptd5b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-5kfjr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26s
	  kube-system                 kube-proxy-s65dd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20s                kube-proxy       
	  Normal  NodeHasSufficientMemory  27s (x2 over 27s)  kubelet          Node multinode-985000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x2 over 27s)  kubelet          Node multinode-985000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x2 over 27s)  kubelet          Node multinode-985000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           26s                node-controller  Node multinode-985000-m03 event: Registered Node multinode-985000-m03 in Controller
	  Normal  NodeReady                4s                 kubelet          Node multinode-985000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.261909] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.788416] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
	[  +0.099076] systemd-fstab-generator[502]: Ignoring "noauto" option for root device
	[  +1.730104] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +0.293514] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.050985] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.056812] systemd-fstab-generator[892]: Ignoring "noauto" option for root device
	[  +0.126132] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[  +2.458612] systemd-fstab-generator[1120]: Ignoring "noauto" option for root device
	[  +0.104830] systemd-fstab-generator[1132]: Ignoring "noauto" option for root device
	[  +0.110549] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +0.128910] systemd-fstab-generator[1159]: Ignoring "noauto" option for root device
	[  +3.841948] systemd-fstab-generator[1259]: Ignoring "noauto" option for root device
	[  +0.049995] kauditd_printk_skb: 180 callbacks suppressed
	[  +2.575866] systemd-fstab-generator[1508]: Ignoring "noauto" option for root device
	[  +3.513702] systemd-fstab-generator[1689]: Ignoring "noauto" option for root device
	[  +0.052965] kauditd_printk_skb: 70 callbacks suppressed
	[Aug 5 23:21] systemd-fstab-generator[2095]: Ignoring "noauto" option for root device
	[  +0.093506] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.997559] systemd-fstab-generator[2287]: Ignoring "noauto" option for root device
	[  +0.103967] kauditd_printk_skb: 12 callbacks suppressed
	[ +16.210215] kauditd_printk_skb: 60 callbacks suppressed
	[Aug 5 23:22] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [1fdd85b796ab] <==
	{"level":"info","ts":"2024-08-05T23:21:02.190598Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:21:02.190621Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:21:02.179152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 switched to configuration voters=(16152458731666035825)"}
	{"level":"info","ts":"2024-08-05T23:21:02.190761Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2024-08-05T23:21:02.845352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.84543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.845462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.845512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.849595Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.851787Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-985000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T23:21:02.852037Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:21:02.855611Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-08-05T23:21:02.856003Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.856059Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.85615Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.863221Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:21:02.86336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T23:21:02.863406Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T23:21:02.864495Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T23:31:02.914901Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":684}
	{"level":"info","ts":"2024-08-05T23:31:02.918154Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":684,"took":"2.558785ms","hash":2682644219,"current-db-size-bytes":2088960,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2088960,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-08-05T23:31:02.918199Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2682644219,"revision":684,"compact-revision":-1}
	
	
	==> kernel <==
	 23:35:15 up 14 min,  0 users,  load average: 0.45, 0.18, 0.11
	Linux multinode-985000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [724e5cfab0a2] <==
	I0805 23:33:54.988562       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:33:54.988724       1 main.go:299] handling current node
	I0805 23:34:04.990678       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:04.991047       1 main.go:299] handling current node
	I0805 23:34:14.989462       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:14.989592       1 main.go:299] handling current node
	I0805 23:34:24.989135       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:24.989269       1 main.go:299] handling current node
	I0805 23:34:34.997631       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:34.997789       1 main.go:299] handling current node
	I0805 23:34:44.997368       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:44.997416       1 main.go:299] handling current node
	I0805 23:34:54.992568       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:54.992629       1 main.go:299] handling current node
	I0805 23:34:54.992643       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:34:54.992648       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.1.0/24] 
	I0805 23:34:54.992876       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.169.0.15 Flags: [] Table: 0} 
	I0805 23:35:04.990312       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:35:04.990398       1 main.go:299] handling current node
	I0805 23:35:04.990506       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:35:04.990544       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.1.0/24] 
	I0805 23:35:14.988650       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:35:14.988669       1 main.go:299] handling current node
	I0805 23:35:14.988679       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:35:14.988682       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [608878b33f35] <==
	I0805 23:21:04.097032       1 aggregator.go:165] initial CRD sync complete...
	I0805 23:21:04.097038       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 23:21:04.097041       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 23:21:04.097046       1 cache.go:39] Caches are synced for autoregister controller
	I0805 23:21:04.110976       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 23:21:04.964782       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0805 23:21:04.969492       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0805 23:21:04.969592       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0805 23:21:05.293407       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 23:21:05.318630       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 23:21:05.372930       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0805 23:21:05.377089       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I0805 23:21:05.377814       1 controller.go:615] quota admission added evaluator for: endpoints
	I0805 23:21:05.381896       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 23:21:06.014220       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0805 23:21:06.529594       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 23:21:06.534785       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0805 23:21:06.541889       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 23:21:20.069451       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0805 23:21:20.168118       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0805 23:34:22.712021       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52583: use of closed network connection
	E0805 23:34:23.040370       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52588: use of closed network connection
	E0805 23:34:23.352264       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52593: use of closed network connection
	E0805 23:34:26.444399       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52624: use of closed network connection
	E0805 23:34:26.631411       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52626: use of closed network connection
	
	
	==> kube-controller-manager [d11865076c64] <==
	I0805 23:21:20.453666       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.448745ms"
	I0805 23:21:20.454853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="1.144243ms"
	I0805 23:21:20.787054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.481389ms"
	I0805 23:21:20.817469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.368774ms"
	I0805 23:21:20.817550       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.975µs"
	I0805 23:21:35.878200       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.077µs"
	I0805 23:21:35.888778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.967µs"
	I0805 23:21:37.680305       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.353µs"
	I0805 23:21:37.699191       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="7.51419ms"
	I0805 23:21:37.699276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.856µs"
	I0805 23:21:39.419986       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0805 23:22:57.139604       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.652844ms"
	I0805 23:22:57.152479       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.645403ms"
	I0805 23:22:57.161837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.312944ms"
	I0805 23:22:57.161913       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.986µs"
	I0805 23:22:59.131878       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.268042ms"
	I0805 23:22:59.132399       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.529µs"
	I0805 23:34:49.118620       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-985000-m03\" does not exist"
	I0805 23:34:49.123685       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-985000-m03" podCIDRs=["10.244.1.0/24"]
	I0805 23:34:49.553799       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-985000-m03"
	I0805 23:35:12.244278       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-985000-m03"
	I0805 23:35:12.252224       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.969µs"
	I0805 23:35:12.259725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.754µs"
	I0805 23:35:14.267796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.716009ms"
	I0805 23:35:14.267862       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.069µs"
	
	
	==> kube-proxy [d58ca48f9f8b] <==
	I0805 23:21:21.029929       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:21:21.072929       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0805 23:21:21.105532       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:21:21.105552       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:21:21.105563       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:21:21.107493       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:21:21.107594       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:21:21.107602       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:21:21.108477       1 config.go:192] "Starting service config controller"
	I0805 23:21:21.108482       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:21:21.108492       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:21:21.108494       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:21:21.108784       1 config.go:319] "Starting node config controller"
	I0805 23:21:21.108789       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:21:21.209420       1 shared_informer.go:320] Caches are synced for node config
	I0805 23:21:21.209474       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:21:21.209501       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [792feba1a6f6] <==
	E0805 23:21:04.024310       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:04.024229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 23:21:04.024017       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:21:04.024329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:21:04.024047       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:21:04.024362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:21:04.024118       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:21:04.024431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0805 23:21:04.860871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:04.861069       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:04.959895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 23:21:04.959949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 23:21:04.962444       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:21:04.962496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:21:04.968410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:21:04.968452       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:21:05.030527       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 23:21:05.030566       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 23:21:05.076451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:05.076659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:05.118159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:05.118676       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:05.141945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:21:05.142020       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0805 23:21:08.218627       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 23:31:06 multinode-985000 kubelet[2102]: E0805 23:31:06.388949    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:31:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:31:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:31:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:31:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:32:06 multinode-985000 kubelet[2102]: E0805 23:32:06.388091    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:33:06 multinode-985000 kubelet[2102]: E0805 23:33:06.388876    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:34:06 multinode-985000 kubelet[2102]: E0805 23:34:06.388016    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:35:06 multinode-985000 kubelet[2102]: E0805 23:35:06.389737    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:35:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:35:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:35:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:35:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-985000 -n multinode-985000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-985000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/AddNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/AddNode (47.59s)
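
Note on the kubelet log above: the same "Could not set up iptables canary" error repeats once per minute because the ip6tables `nat' table is not available in the guest kernel. A minimal way to confirm this from a shell on the node (a sketch, assuming the multinode-985000 profile from this run is still up; the commands are standard minikube/iptables CLI):

	$ out/minikube-darwin-amd64 -p multinode-985000 ssh
	# inside the VM:
	$ sudo ip6tables -t nat -L -n      # reproduces the "Table does not exist" error seen above
	$ sudo modprobe ip6table_nat       # attempt to load the IPv6 nat module, if the kernel ships it
	$ lsmod | grep ip6table_nat        # verify whether the module loaded

These canary failures recur in every post-mortem in this report and are not necessarily what failed TestMultiNode/serial/AddNode; the failure itself is raised by the status checks in helpers_test.go.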

                                                
                                    
TestMultiNode/serial/CopyFile (2.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status --output json --alsologtostderr: exit status 2 (307.142329ms)

                                                
                                                
-- stdout --
	[{"Name":"multinode-985000","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"multinode-985000-m02","Host":"Running","Kubelet":"Stopped","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true},{"Name":"multinode-985000-m03","Host":"Running","Kubelet":"Running","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:35:17.033821    5313 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:35:17.034093    5313 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:35:17.034098    5313 out.go:304] Setting ErrFile to fd 2...
	I0805 16:35:17.034102    5313 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:35:17.034277    5313 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:35:17.034457    5313 out.go:298] Setting JSON to true
	I0805 16:35:17.034479    5313 mustload.go:65] Loading cluster: multinode-985000
	I0805 16:35:17.034517    5313 notify.go:220] Checking for updates...
	I0805 16:35:17.034785    5313 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:35:17.034801    5313 status.go:255] checking status of multinode-985000 ...
	I0805 16:35:17.035196    5313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:17.035243    5313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:17.043801    5313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52748
	I0805 16:35:17.044142    5313 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:17.044550    5313 main.go:141] libmachine: Using API Version  1
	I0805 16:35:17.044562    5313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:17.044770    5313 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:17.044881    5313 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:35:17.044962    5313 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:35:17.045027    5313 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:35:17.046014    5313 status.go:330] multinode-985000 host status = "Running" (err=<nil>)
	I0805 16:35:17.046038    5313 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:35:17.046274    5313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:17.046293    5313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:17.054515    5313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52750
	I0805 16:35:17.054850    5313 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:17.055200    5313 main.go:141] libmachine: Using API Version  1
	I0805 16:35:17.055217    5313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:17.055411    5313 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:17.055519    5313 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:35:17.055609    5313 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:35:17.055857    5313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:17.055878    5313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:17.066452    5313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52752
	I0805 16:35:17.066795    5313 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:17.067102    5313 main.go:141] libmachine: Using API Version  1
	I0805 16:35:17.067113    5313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:17.067307    5313 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:17.067415    5313 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:35:17.067554    5313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:35:17.067576    5313 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:35:17.067661    5313 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:35:17.067747    5313 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:35:17.067838    5313 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:35:17.067926    5313 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:35:17.098237    5313 ssh_runner.go:195] Run: systemctl --version
	I0805 16:35:17.102582    5313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:35:17.114098    5313 kubeconfig.go:125] found "multinode-985000" server: "https://192.169.0.13:8443"
	I0805 16:35:17.114124    5313 api_server.go:166] Checking apiserver status ...
	I0805 16:35:17.114162    5313 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:35:17.127310    5313 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup
	W0805 16:35:17.134688    5313 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:35:17.134731    5313 ssh_runner.go:195] Run: ls
	I0805 16:35:17.137812    5313 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:35:17.140826    5313 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0805 16:35:17.140837    5313 status.go:422] multinode-985000 apiserver status = Running (err=<nil>)
	I0805 16:35:17.140852    5313 status.go:257] multinode-985000 status: &{Name:multinode-985000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:35:17.140865    5313 status.go:255] checking status of multinode-985000-m02 ...
	I0805 16:35:17.141113    5313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:17.141132    5313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:17.149908    5313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52756
	I0805 16:35:17.150231    5313 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:17.150553    5313 main.go:141] libmachine: Using API Version  1
	I0805 16:35:17.150568    5313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:17.150777    5313 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:17.150891    5313 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:35:17.150975    5313 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:35:17.151045    5313 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:35:17.152010    5313 status.go:330] multinode-985000-m02 host status = "Running" (err=<nil>)
	I0805 16:35:17.152020    5313 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:35:17.152298    5313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:17.152319    5313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:17.160728    5313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52758
	I0805 16:35:17.161068    5313 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:17.161395    5313 main.go:141] libmachine: Using API Version  1
	I0805 16:35:17.161403    5313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:17.161599    5313 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:17.161707    5313 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:35:17.161805    5313 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:35:17.162051    5313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:17.162073    5313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:17.170402    5313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52760
	I0805 16:35:17.170745    5313 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:17.171109    5313 main.go:141] libmachine: Using API Version  1
	I0805 16:35:17.171125    5313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:17.171323    5313 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:17.171435    5313 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:35:17.171561    5313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:35:17.171572    5313 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:35:17.171657    5313 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:35:17.171745    5313 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:35:17.171827    5313 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:35:17.171901    5313 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:35:17.204360    5313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:35:17.215667    5313 status.go:257] multinode-985000-m02 status: &{Name:multinode-985000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:35:17.215683    5313 status.go:255] checking status of multinode-985000-m03 ...
	I0805 16:35:17.215984    5313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:17.216019    5313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:17.224472    5313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52763
	I0805 16:35:17.224800    5313 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:17.225140    5313 main.go:141] libmachine: Using API Version  1
	I0805 16:35:17.225151    5313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:17.225385    5313 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:17.225495    5313 main.go:141] libmachine: (multinode-985000-m03) Calling .GetState
	I0805 16:35:17.225578    5313 main.go:141] libmachine: (multinode-985000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:35:17.225651    5313 main.go:141] libmachine: (multinode-985000-m03) DBG | hyperkit pid from json: 5266
	I0805 16:35:17.226641    5313 status.go:330] multinode-985000-m03 host status = "Running" (err=<nil>)
	I0805 16:35:17.226649    5313 host.go:66] Checking if "multinode-985000-m03" exists ...
	I0805 16:35:17.226909    5313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:17.226935    5313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:17.235393    5313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52765
	I0805 16:35:17.235724    5313 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:17.236092    5313 main.go:141] libmachine: Using API Version  1
	I0805 16:35:17.236109    5313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:17.236295    5313 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:17.236403    5313 main.go:141] libmachine: (multinode-985000-m03) Calling .GetIP
	I0805 16:35:17.236476    5313 host.go:66] Checking if "multinode-985000-m03" exists ...
	I0805 16:35:17.236737    5313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:17.236760    5313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:17.245180    5313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52767
	I0805 16:35:17.245540    5313 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:17.245871    5313 main.go:141] libmachine: Using API Version  1
	I0805 16:35:17.245888    5313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:17.246092    5313 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:17.246192    5313 main.go:141] libmachine: (multinode-985000-m03) Calling .DriverName
	I0805 16:35:17.246316    5313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:35:17.246328    5313 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHHostname
	I0805 16:35:17.246405    5313 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHPort
	I0805 16:35:17.246480    5313 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHKeyPath
	I0805 16:35:17.246549    5313 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHUsername
	I0805 16:35:17.246615    5313 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m03/id_rsa Username:docker}
	I0805 16:35:17.274903    5313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:35:17.286255    5313 status.go:257] multinode-985000-m03 status: &{Name:multinode-985000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:186: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-985000 status --output json --alsologtostderr" : exit status 2
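The JSON payload above pinpoints the problem: multinode-985000-m02 reports Kubelet "Stopped", which is why the status command exited 2 rather than 0 and why the test fails at multinode_test.go:186. To isolate the degraded node from the same payload by hand (a sketch; assumes jq is installed on the host, and note that the status command itself exits non-zero here):

	$ out/minikube-darwin-amd64 -p multinode-985000 status --output json 2>/dev/null \
	    | jq -r '.[] | select(.Host != "Running" or .Kubelet != "Running") | .Name'

For the payload shown above, this would print only multinode-985000-m02.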
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:244: <<< TestMultiNode/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/CopyFile]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-985000 logs -n 25: (1.902276724s)
helpers_test.go:252: TestMultiNode/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| start   | -p multinode-985000                               | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:20 PDT |                     |
	|         | --wait=true --memory=2200                         |                  |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                  |         |         |                     |                     |
	|         | --alsologtostderr                                 |                  |         |         |                     |                     |
	|         | --driver=hyperkit                                 |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- apply -f                   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:22 PDT | 05 Aug 24 16:22 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- rollout                    | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:22 PDT |                     |
	|         | status deployment/busybox                         |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:32 PDT | 05 Aug 24 16:32 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g -- nslookup               |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b -- nslookup               |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g                           |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g -- sh                     |                  |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1                          |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b                           |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |         |         |                     |                     |
	| node    | add -p multinode-985000 -v 3                      | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:35 PDT |
	|         | --alsologtostderr                                 |                  |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 16:20:32
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 16:20:32.303800    4640 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:20:32.303980    4640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:20:32.303986    4640 out.go:304] Setting ErrFile to fd 2...
	I0805 16:20:32.303990    4640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:20:32.304163    4640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:20:32.305609    4640 out.go:298] Setting JSON to false
	I0805 16:20:32.329307    4640 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3003,"bootTime":1722897029,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:20:32.329400    4640 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:20:32.351877    4640 out.go:177] * [multinode-985000] minikube v1.33.1 on Darwin 14.5
	I0805 16:20:32.392940    4640 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:20:32.393020    4640 notify.go:220] Checking for updates...
	I0805 16:20:32.435775    4640 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:20:32.456783    4640 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:20:32.477872    4640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:20:32.499010    4640 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:20:32.519936    4640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:20:32.541363    4640 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:20:32.571784    4640 out.go:177] * Using the hyperkit driver based on user configuration
	I0805 16:20:32.613992    4640 start.go:297] selected driver: hyperkit
	I0805 16:20:32.614020    4640 start.go:901] validating driver "hyperkit" against <nil>
	I0805 16:20:32.614042    4640 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:20:32.618322    4640 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:20:32.618456    4640 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 16:20:32.627075    4640 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 16:20:32.631391    4640 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:20:32.631417    4640 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 16:20:32.631452    4640 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:20:32.631678    4640 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:20:32.631709    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:20:32.631719    4640 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0805 16:20:32.631730    4640 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 16:20:32.631823    4640 start.go:340] cluster config:
	{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:20:32.631925    4640 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:20:32.673756    4640 out.go:177] * Starting "multinode-985000" primary control-plane node in "multinode-985000" cluster
	I0805 16:20:32.695001    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:20:32.695088    4640 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 16:20:32.695107    4640 cache.go:56] Caching tarball of preloaded images
	I0805 16:20:32.695319    4640 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:20:32.695338    4640 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:20:32.695809    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:20:32.695848    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json: {Name:mk470c2e849a0c86ee251e86e74d9f6dfdb47dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:32.696485    4640 start.go:360] acquireMachinesLock for multinode-985000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:20:32.696593    4640 start.go:364] duration metric: took 88.666µs to acquireMachinesLock for "multinode-985000"
	I0805 16:20:32.696646    4640 start.go:93] Provisioning new machine with config: &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:20:32.696745    4640 start.go:125] createHost starting for "" (driver="hyperkit")
	I0805 16:20:32.718059    4640 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:20:32.718351    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:20:32.718416    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:20:32.728195    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52477
	I0805 16:20:32.728547    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:20:32.728938    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:20:32.728948    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:20:32.729147    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:20:32.729251    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:32.729369    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:32.729498    4640 start.go:159] libmachine.API.Create for "multinode-985000" (driver="hyperkit")
	I0805 16:20:32.729521    4640 client.go:168] LocalClient.Create starting
	I0805 16:20:32.729556    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:20:32.729608    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:20:32.729625    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:20:32.729685    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:20:32.729724    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:20:32.729737    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:20:32.729749    4640 main.go:141] libmachine: Running pre-create checks...
	I0805 16:20:32.729760    4640 main.go:141] libmachine: (multinode-985000) Calling .PreCreateCheck
	I0805 16:20:32.729840    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:32.729974    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:32.739224    4640 main.go:141] libmachine: Creating machine...
	I0805 16:20:32.739247    4640 main.go:141] libmachine: (multinode-985000) Calling .Create
	I0805 16:20:32.739475    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:32.739754    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.739457    4648 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:20:32.739852    4640 main.go:141] libmachine: (multinode-985000) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:20:32.920622    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.920524    4648 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa...
	I0805 16:20:32.957084    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.957005    4648 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk...
	I0805 16:20:32.957123    4640 main.go:141] libmachine: (multinode-985000) DBG | Writing magic tar header
	I0805 16:20:32.957134    4640 main.go:141] libmachine: (multinode-985000) DBG | Writing SSH key tar header
	I0805 16:20:32.957531    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.957490    4648 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000 ...
	I0805 16:20:33.331110    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:33.331140    4640 main.go:141] libmachine: (multinode-985000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid
	I0805 16:20:33.331159    4640 main.go:141] libmachine: (multinode-985000) DBG | Using UUID 3ac698fc-f622-443b-898d-9b152fa64288
	I0805 16:20:33.442582    4640 main.go:141] libmachine: (multinode-985000) DBG | Generated MAC e2:6:14:d2:13:ae
	I0805 16:20:33.442603    4640 main.go:141] libmachine: (multinode-985000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:20:33.442636    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:20:33.442669    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:20:33.442719    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3ac698fc-f622-443b-898d-9b152fa64288", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:20:33.442758    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3ac698fc-f622-443b-898d-9b152fa64288 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
	I0805 16:20:33.442774    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:20:33.445733    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Pid is 4651
	I0805 16:20:33.446145    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 0
	I0805 16:20:33.446167    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:33.446227    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:33.447073    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:33.447135    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:33.447152    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:33.447186    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:33.447202    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:33.447214    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:33.447222    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:33.447229    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:33.447247    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:33.447269    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:33.447287    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:33.447304    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:33.447321    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:33.453446    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:20:33.506623    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:20:33.507268    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:20:33.507283    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:20:33.507290    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:20:33.507298    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:20:33.891346    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:20:33.891387    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:20:34.006163    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:20:34.006177    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:20:34.006189    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:20:34.006208    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:20:34.007050    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:20:34.007082    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:20:35.448624    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 1
	I0805 16:20:35.448640    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:35.448724    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:35.449516    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:35.449591    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:35.449607    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:35.449619    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:35.449625    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:35.449648    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:35.449664    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:35.449695    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:35.449711    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:35.449719    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:35.449725    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:35.449731    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:35.449738    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:37.449834    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 2
	I0805 16:20:37.449851    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:37.449867    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:37.450676    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:37.450690    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:37.450697    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:37.450707    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:37.450722    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:37.450733    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:37.450744    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:37.450754    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:37.450771    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:37.450784    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:37.450797    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:37.450809    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:37.450819    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:39.451161    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 3
	I0805 16:20:39.451179    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:39.451277    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:39.452025    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:39.452066    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:39.452089    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:39.452104    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:39.452124    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:39.452141    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:39.452154    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:39.452161    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:39.452167    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:39.452183    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:39.452195    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:39.452202    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:39.452211    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:39.592041    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:20:39.592070    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:20:39.592076    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:20:39.615760    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:20:41.452210    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 4
	I0805 16:20:41.452225    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:41.452325    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:41.453101    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:41.453153    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:41.453162    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:41.453169    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:41.453178    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:41.453187    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:41.453194    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:41.453200    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:41.453219    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:41.453231    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:41.453241    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:41.453250    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:41.453258    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:43.455148    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 5
	I0805 16:20:43.455166    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:43.455244    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:43.456059    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:43.456103    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:20:43.456115    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:20:43.456122    4640 main.go:141] libmachine: (multinode-985000) DBG | Found match: e2:6:14:d2:13:ae
	I0805 16:20:43.456127    4640 main.go:141] libmachine: (multinode-985000) DBG | IP: 192.169.0.13
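The attempts above are the hyperkit driver polling /var/db/dhcpd_leases every two seconds until the VM's MAC address (e2:6:14:d2:13:ae) appears with a fresh lease, at which point it learns the guest IP (192.169.0.13). A minimal Go sketch of that lookup, assuming lease entries are rendered one per line in the IPAddress:/HWAddress: form shown in the log (the on-disk dhcpd_leases syntax differs slightly):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
	"time"
)

// findLeaseIP scans the leases file for an entry whose HWAddress matches
// the target MAC and returns its IPAddress field.
func findLeaseIP(path, mac string) (string, bool) {
	f, err := os.Open(path)
	if err != nil {
		return "", false
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, "HWAddress:"+mac+" ") {
			continue
		}
		for _, field := range strings.Fields(line) {
			if strings.HasPrefix(field, "IPAddress:") {
				return strings.TrimPrefix(field, "IPAddress:"), true
			}
		}
	}
	return "", false
}

func main() {
	// Retry loop mirroring "Attempt 0", "Attempt 1", ... in the log.
	for attempt := 0; attempt < 30; attempt++ {
		if ip, ok := findLeaseIP("/var/db/dhcpd_leases", "e2:6:14:d2:13:ae"); ok {
			fmt.Println("IP:", ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("no lease found")
}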
	I0805 16:20:43.456181    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:43.456781    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:43.456879    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:43.456972    4640 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 16:20:43.456985    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:20:43.457082    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:43.457144    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:43.457907    4640 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 16:20:43.457917    4640 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 16:20:43.457923    4640 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 16:20:43.457927    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:43.458023    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:43.458126    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:43.458255    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:43.458346    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:43.458472    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:43.458676    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:43.458683    4640 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 16:20:44.513424    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
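WaitForSSH reduces to dialing port 22 and running a no-op `exit 0` until the command succeeds, which is exactly what the log shows. A sketch using golang.org/x/crypto/ssh; the insecure host-key callback and the two-second retry interval are assumptions for the example (the real driver authenticates with the machine's generated id_rsa key):

package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH repeatedly dials the guest and runs "exit 0" until the
// command succeeds or the timeout elapses.
func waitForSSH(addr string, config *ssh.ClientConfig, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if client, err := ssh.Dial("tcp", addr, config); err == nil {
			if session, serr := client.NewSession(); serr == nil {
				runErr := session.Run("exit 0")
				session.Close()
				client.Close()
				if runErr == nil {
					return nil
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh not available on %s after %s", addr, timeout)
}

func main() {
	cfg := &ssh.ClientConfig{
		User: "docker",
		// The driver would supply ssh.PublicKeys(...) with the machine key here.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	if err := waitForSSH("192.169.0.13:22", cfg, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}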
	I0805 16:20:44.513443    4640 main.go:141] libmachine: Detecting the provisioner...
	I0805 16:20:44.513452    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.513594    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.513694    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.513791    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.513876    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.513996    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.514158    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.514165    4640 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 16:20:44.573082    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 16:20:44.573142    4640 main.go:141] libmachine: found compatible host: buildroot
	I0805 16:20:44.573149    4640 main.go:141] libmachine: Provisioning with buildroot...
	I0805 16:20:44.573155    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.573299    4640 buildroot.go:166] provisioning hostname "multinode-985000"
	I0805 16:20:44.573311    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.573416    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.573499    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.573585    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.573680    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.573795    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.573922    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.574068    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.574076    4640 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000 && echo "multinode-985000" | sudo tee /etc/hostname
	I0805 16:20:44.637872    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000
	
	I0805 16:20:44.637892    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.638029    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.638132    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.638218    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.638297    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.638429    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.638562    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.638582    4640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:20:44.698340    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:20:44.698360    4640 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:20:44.698377    4640 buildroot.go:174] setting up certificates
	I0805 16:20:44.698389    4640 provision.go:84] configureAuth start
	I0805 16:20:44.698397    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.698544    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:44.698658    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.698750    4640 provision.go:143] copyHostCerts
	I0805 16:20:44.698781    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:20:44.698850    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:20:44.698858    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:20:44.699001    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:20:44.699205    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:20:44.699246    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:20:44.699250    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:20:44.699341    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:20:44.699482    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:20:44.699528    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:20:44.699533    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:20:44.699615    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:20:44.699756    4640 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-985000]
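The server cert is generated with SANs covering every name the Docker endpoint may be reached by: 127.0.0.1, the freshly leased guest IP, localhost, minikube, and the machine name. A self-signed sketch with crypto/x509 (the real step signs with the ca.pem/ca-key.pem pair rather than self-signing; the key size here is an assumption):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-985000"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the provision.go line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.13")},
		DNSNames:    []string{"localhost", "minikube", "multinode-985000"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}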
	I0805 16:20:45.028860    4640 provision.go:177] copyRemoteCerts
	I0805 16:20:45.028920    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:20:45.028938    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.029080    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.029180    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.029338    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.029452    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:45.063652    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:20:45.063724    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:20:45.083743    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:20:45.083800    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0805 16:20:45.103791    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:20:45.103863    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:20:45.123716    4640 provision.go:87] duration metric: took 425.312704ms to configureAuth
	I0805 16:20:45.123731    4640 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:20:45.123881    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:20:45.123894    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:45.124028    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.124115    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.124206    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.124285    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.124381    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.124503    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.124632    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.124639    4640 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:20:45.176256    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:20:45.176269    4640 buildroot.go:70] root file system type: tmpfs
	I0805 16:20:45.176337    4640 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:20:45.176350    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.176482    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.176580    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.176695    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.176782    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.176911    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.177045    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.177090    4640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:20:45.240992    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:20:45.241023    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.241166    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.241270    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.241382    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.241469    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.241590    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.241743    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.241755    4640 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:20:46.765402    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
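The `sudo diff -u ... || { sudo mv ...; sudo systemctl ... }` one-liner is an idempotency guard: the rendered unit only replaces the installed one, and docker is only reloaded, enabled, and restarted, when the content actually differs. Here diff fails because no unit exists yet, so the file is installed and the symlink created. The same guard expressed as a hypothetical Go helper:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// replaceIfChanged writes newContent to path only when it differs from the
// current file (or the file is missing), reporting whether a
// daemon-reload/restart would be needed.
func replaceIfChanged(path string, newContent []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, newContent) {
		return false, nil // identical: leave the unit alone
	}
	if err := os.WriteFile(path, newContent, 0o644); err != nil {
		return false, err
	}
	return true, nil
}

func main() {
	changed, err := replaceIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
	if err != nil {
		fmt.Println(err)
		return
	}
	if changed {
		fmt.Println("unit updated; daemon-reload and restart would follow")
	}
}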
	
	I0805 16:20:46.765418    4640 main.go:141] libmachine: Checking connection to Docker...
	I0805 16:20:46.765424    4640 main.go:141] libmachine: (multinode-985000) Calling .GetURL
	I0805 16:20:46.765563    4640 main.go:141] libmachine: Docker is up and running!
	I0805 16:20:46.765570    4640 main.go:141] libmachine: Reticulating splines...
	I0805 16:20:46.765575    4640 client.go:171] duration metric: took 14.036043683s to LocalClient.Create
	I0805 16:20:46.765592    4640 start.go:167] duration metric: took 14.036090848s to libmachine.API.Create "multinode-985000"
	I0805 16:20:46.765602    4640 start.go:293] postStartSetup for "multinode-985000" (driver="hyperkit")
	I0805 16:20:46.765609    4640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:20:46.765620    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.765765    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:20:46.765778    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.765878    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.765972    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.766070    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.766168    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.808597    4640 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:20:46.814840    4640 command_runner.go:130] > NAME=Buildroot
	I0805 16:20:46.814852    4640 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:20:46.814856    4640 command_runner.go:130] > ID=buildroot
	I0805 16:20:46.814869    4640 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:20:46.814873    4640 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:20:46.814969    4640 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:20:46.814985    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:20:46.815099    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:20:46.815290    4640 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:20:46.815297    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:20:46.815526    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:20:46.832473    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:20:46.852626    4640 start.go:296] duration metric: took 87.015317ms for postStartSetup
	I0805 16:20:46.852653    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:46.853264    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:46.853417    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:20:46.853762    4640 start.go:128] duration metric: took 14.156998155s to createHost
	I0805 16:20:46.853776    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.853870    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.853964    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.854078    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.854160    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.854284    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:46.854405    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:46.854413    4640 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 16:20:46.906137    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722900047.071906799
	
	I0805 16:20:46.906149    4640 fix.go:216] guest clock: 1722900047.071906799
	I0805 16:20:46.906154    4640 fix.go:229] Guest: 2024-08-05 16:20:47.071906799 -0700 PDT Remote: 2024-08-05 16:20:46.85377 -0700 PDT m=+14.585721958 (delta=218.136799ms)
	I0805 16:20:46.906178    4640 fix.go:200] guest clock delta is within tolerance: 218.136799ms
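fix.go compares the guest's `date +%s.%N` output against the host clock and only resyncs when the drift exceeds a tolerance; the 218ms delta above passes. A sketch of the parse-and-compare, with an assumed two-second threshold (the actual tolerance isn't shown in the log):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses "seconds.nanoseconds" as printed by `date +%s.%N`
// (a fixed 9-digit fractional field) and returns guest-minus-host drift.
func guestClockDelta(out string) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 && len(parts[1]) == 9 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec).Sub(time.Now()), nil
}

func main() {
	delta, err := guestClockDelta("1722900047.071906799")
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed threshold for the sketch
	if delta < -tolerance || delta > tolerance {
		fmt.Printf("drift %v exceeds tolerance, would sync guest clock\n", delta)
	} else {
		fmt.Printf("drift %v within tolerance\n", delta)
	}
}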
	I0805 16:20:46.906182    4640 start.go:83] releasing machines lock for "multinode-985000", held for 14.209573761s
	I0805 16:20:46.906200    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906321    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:46.906429    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906734    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906832    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906917    4640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:20:46.906947    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.906977    4640 ssh_runner.go:195] Run: cat /version.json
	I0805 16:20:46.906987    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.907036    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.907080    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.907105    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.907167    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.907190    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.907251    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.907285    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.907353    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.936969    4640 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0805 16:20:46.937263    4640 ssh_runner.go:195] Run: systemctl --version
	I0805 16:20:46.992747    4640 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:20:46.993626    4640 command_runner.go:130] > systemd 252 (252)
	I0805 16:20:46.993660    4640 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0805 16:20:46.993799    4640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:20:46.998949    4640 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:20:46.998969    4640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:20:46.999002    4640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:20:47.012276    4640 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:20:47.012544    4640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:20:47.012556    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:20:47.012657    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:20:47.027593    4640 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:20:47.027660    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:20:47.035836    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:20:47.044911    4640 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:20:47.044968    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:20:47.053571    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:20:47.061858    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:20:47.070031    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:20:47.078524    4640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:20:47.087870    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:20:47.096303    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:20:47.104482    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:20:47.112756    4640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:20:47.120033    4640 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:20:47.120127    4640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:20:47.128644    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:47.220387    4640 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:20:47.239567    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:20:47.239642    4640 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:20:47.254939    4640 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:20:47.255001    4640 command_runner.go:130] > [Unit]
	I0805 16:20:47.255011    4640 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:20:47.255015    4640 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:20:47.255020    4640 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:20:47.255026    4640 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:20:47.255030    4640 command_runner.go:130] > StartLimitBurst=3
	I0805 16:20:47.255034    4640 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:20:47.255037    4640 command_runner.go:130] > [Service]
	I0805 16:20:47.255041    4640 command_runner.go:130] > Type=notify
	I0805 16:20:47.255055    4640 command_runner.go:130] > Restart=on-failure
	I0805 16:20:47.255063    4640 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:20:47.255073    4640 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:20:47.255080    4640 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:20:47.255088    4640 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:20:47.255094    4640 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:20:47.255099    4640 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:20:47.255112    4640 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:20:47.255120    4640 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:20:47.255128    4640 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:20:47.255134    4640 command_runner.go:130] > ExecStart=
	I0805 16:20:47.255164    4640 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:20:47.255172    4640 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:20:47.255182    4640 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:20:47.255189    4640 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:20:47.255193    4640 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:20:47.255196    4640 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:20:47.255200    4640 command_runner.go:130] > LimitCORE=infinity
	I0805 16:20:47.255205    4640 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:20:47.255209    4640 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:20:47.255212    4640 command_runner.go:130] > TasksMax=infinity
	I0805 16:20:47.255215    4640 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:20:47.255220    4640 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:20:47.255225    4640 command_runner.go:130] > Delegate=yes
	I0805 16:20:47.255230    4640 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:20:47.255233    4640 command_runner.go:130] > KillMode=process
	I0805 16:20:47.255236    4640 command_runner.go:130] > [Install]
	I0805 16:20:47.255259    4640 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:20:47.255324    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:20:47.269909    4640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:20:47.286027    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:20:47.296365    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:20:47.306405    4640 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:20:47.369760    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:20:47.379998    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:20:47.394696    4640 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:20:47.394951    4640 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:20:47.397850    4640 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:20:47.398038    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:20:47.406063    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:20:47.419537    4640 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:20:47.514227    4640 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:20:47.637079    4640 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:20:47.637156    4640 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
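The 130-byte /etc/docker/daemon.json written from memory here is what pins Docker itself to the cgroupfs driver chosen above; the payload isn't echoed in the log. A plausible reconstruction using Docker's documented exec-opts key (the exact fields minikube writes are an assumption):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]any{
		// Documented Docker daemon option for selecting the cgroup driver.
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}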
	I0805 16:20:47.651314    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:47.748259    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:20:50.076345    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.32806615s)
	I0805 16:20:50.076407    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:20:50.086580    4640 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 16:20:50.099944    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:20:50.110410    4640 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:20:50.206329    4640 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:20:50.317239    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:50.417670    4640 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:20:50.431617    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:20:50.443305    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:50.555307    4640 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:20:50.610408    4640 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:20:50.610481    4640 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:20:50.614751    4640 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0805 16:20:50.614762    4640 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0805 16:20:50.614767    4640 command_runner.go:130] > Device: 0,22	Inode: 806         Links: 1
	I0805 16:20:50.614772    4640 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0805 16:20:50.614775    4640 command_runner.go:130] > Access: 2024-08-05 23:20:50.735793184 +0000
	I0805 16:20:50.614784    4640 command_runner.go:130] > Modify: 2024-08-05 23:20:50.735793184 +0000
	I0805 16:20:50.614789    4640 command_runner.go:130] > Change: 2024-08-05 23:20:50.736793062 +0000
	I0805 16:20:50.614792    4640 command_runner.go:130] >  Birth: -
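start.go then gives the CRI socket a 60-second budget to appear; the stat output above confirms /var/run/cri-dockerd.sock exists with socket mode srw-rw----. A minimal Go version of that wait:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, or the
// timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}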
	I0805 16:20:50.614829    4640 start.go:563] Will wait 60s for crictl version
	I0805 16:20:50.614890    4640 ssh_runner.go:195] Run: which crictl
	I0805 16:20:50.617807    4640 command_runner.go:130] > /usr/bin/crictl
	I0805 16:20:50.617933    4640 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:20:50.644026    4640 command_runner.go:130] > Version:  0.1.0
	I0805 16:20:50.644070    4640 command_runner.go:130] > RuntimeName:  docker
	I0805 16:20:50.644117    4640 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0805 16:20:50.644195    4640 command_runner.go:130] > RuntimeApiVersion:  v1
	I0805 16:20:50.645396    4640 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0805 16:20:50.645460    4640 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:20:50.661131    4640 command_runner.go:130] > 27.1.1
	I0805 16:20:50.662194    4640 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:20:50.677860    4640 command_runner.go:130] > 27.1.1
	I0805 16:20:50.700872    4640 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 16:20:50.700922    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:50.701316    4640 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0805 16:20:50.706154    4640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:20:50.715610    4640 kubeadm.go:883] updating cluster {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 16:20:50.715677    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:20:50.715736    4640 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:20:50.733572    4640 docker.go:685] Got preloaded images: 
	I0805 16:20:50.733584    4640 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0805 16:20:50.733634    4640 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:20:50.741005    4640 command_runner.go:139] > {"Repositories":{}}
	I0805 16:20:50.741090    4640 ssh_runner.go:195] Run: which lz4
	I0805 16:20:50.744527    4640 command_runner.go:130] > /usr/bin/lz4
	I0805 16:20:50.744558    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0805 16:20:50.744692    4640 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 16:20:50.747718    4640 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 16:20:50.747836    4640 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 16:20:50.747851    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0805 16:20:51.865752    4640 docker.go:649] duration metric: took 1.121114736s to copy over tarball
	I0805 16:20:51.865833    4640 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 16:20:54.241811    4640 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.375959074s)
	I0805 16:20:54.241825    4640 ssh_runner.go:146] rm: /preloaded.tar.lz4
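This block shows the preload fast path: stat the target, transfer the ~360 MB tarball only when the stat fails, extract it with lz4, then delete it. A minimal sketch of the same check-before-copy flow, using local exec.Command as a stand-in for minikube's SSH runner (ensurePreload is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensurePreload mirrors the flow in the log: stat the target, and only
    // copy + extract when the stat fails (file absent).
    func ensurePreload(src, dst string) error {
        if err := exec.Command("stat", "-c", "%s %y", dst).Run(); err == nil {
            return nil // already present, skip the large transfer
        }
        if err := exec.Command("cp", src, dst).Run(); err != nil {
            return fmt.Errorf("copy preload: %w", err)
        }
        // extract with xattrs preserved, as the log's tar invocation does
        return exec.Command("sudo", "tar", "--xattrs",
            "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", dst).Run()
    }

    func main() {
        _ = ensurePreload("preloaded-images.tar.lz4", "/preloaded.tar.lz4")
    }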
	I0805 16:20:54.267125    4640 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:20:54.275283    4640 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0805 16:20:54.275373    4640 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0805 16:20:54.288931    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:54.386395    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:20:56.795159    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.408741228s)
	I0805 16:20:56.795248    4640 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:20:56.808093    4640 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0805 16:20:56.808107    4640 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0805 16:20:56.808111    4640 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0805 16:20:56.808116    4640 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0805 16:20:56.808120    4640 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0805 16:20:56.808123    4640 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0805 16:20:56.808128    4640 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0805 16:20:56.808135    4640 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:20:56.809018    4640 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 16:20:56.809035    4640 cache_images.go:84] Images are preloaded, skipping loading
	I0805 16:20:56.809048    4640 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0805 16:20:56.809127    4640 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-985000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
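The rendered kubelet unit above uses the standard systemd drop-in trick: an empty ExecStart= line clears the base unit's command before the full command line is redefined. A small illustrative Go template rendering a drop-in of the same shape (the field names are assumptions, not minikube's actual template):

    package main

    import (
        "os"
        "text/template"
    )

    // kubeletDropIn renders a 10-kubeadm.conf-style drop-in. The empty
    // ExecStart= clears the base unit's command before redefining it.
    var kubeletDropIn = template.Must(template.New("unit").Parse(`[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart={{.Bin}} --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf

    [Install]
    `))

    func main() {
        kubeletDropIn.Execute(os.Stdout, map[string]string{
            "Bin":  "/var/lib/minikube/binaries/v1.30.3/kubelet",
            "Node": "multinode-985000",
            "IP":   "192.169.0.13",
        })
    }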
	I0805 16:20:56.809195    4640 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 16:20:56.847007    4640 command_runner.go:130] > cgroupfs
	I0805 16:20:56.847610    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:20:56.847620    4640 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 16:20:56.847630    4640 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 16:20:56.847650    4640 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-985000 NodeName:multinode-985000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 16:20:56.847744    4640 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-985000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 16:20:56.847807    4640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 16:20:56.855919    4640 command_runner.go:130] > kubeadm
	I0805 16:20:56.855931    4640 command_runner.go:130] > kubectl
	I0805 16:20:56.855934    4640 command_runner.go:130] > kubelet
	I0805 16:20:56.855959    4640 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:20:56.856010    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 16:20:56.863284    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0805 16:20:56.876753    4640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:20:56.890292    4640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
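The kubeadm.yaml just shipped pins podSubnet 10.244.0.0/16 and serviceSubnet 10.96.0.0/12 against node IP 192.169.0.13; these ranges must stay disjoint or pod and service routing breaks. A minimal Go sketch of that sanity check with net/netip (the check itself is illustrative, not part of minikube):

    package main

    import (
        "fmt"
        "net/netip"
    )

    // overlaps reports whether two masked prefixes share addresses:
    // for canonical prefixes this holds iff one contains the other's base.
    func overlaps(a, b netip.Prefix) bool {
        return a.Contains(b.Addr()) || b.Contains(a.Addr())
    }

    func main() {
        pod := netip.MustParsePrefix("10.244.0.0/16") // podSubnet
        svc := netip.MustParsePrefix("10.96.0.0/12")  // serviceSubnet
        node := netip.MustParseAddr("192.169.0.13")   // node IP
        fmt.Println("pod/service overlap:", overlaps(pod, svc))   // false
        fmt.Println("node inside podSubnet:", pod.Contains(node)) // false
    }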
	I0805 16:20:56.904628    4640 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0805 16:20:56.907711    4640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:20:56.917108    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:57.013172    4640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:20:57.028650    4640 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000 for IP: 192.169.0.13
	I0805 16:20:57.028663    4640 certs.go:194] generating shared ca certs ...
	I0805 16:20:57.028674    4640 certs.go:226] acquiring lock for ca certs: {Name:mkb83e058d89c7d4e66f4136f377a3c305b13735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.028863    4640 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key
	I0805 16:20:57.028935    4640 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key
	I0805 16:20:57.028946    4640 certs.go:256] generating profile certs ...
	I0805 16:20:57.028995    4640 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key
	I0805 16:20:57.029007    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt with IP's: []
	I0805 16:20:57.088127    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt ...
	I0805 16:20:57.088142    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt: {Name:mkb7087fa165ae496621b10df42dfd2f8603360a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.088531    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key ...
	I0805 16:20:57.088540    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key: {Name:mk37e627de9c39a2300d317d721ebf92a202a17e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.088775    4640 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec
	I0805 16:20:57.088790    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.13]
	I0805 16:20:57.189318    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec ...
	I0805 16:20:57.189336    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec: {Name:mkb4501af4f6db766eb719de2f42fc564a23d2d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.189653    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec ...
	I0805 16:20:57.189669    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec: {Name:mke641ddecfc5629bb592a5b6321d446ed3b31bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.189903    4640 certs.go:381] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt
	I0805 16:20:57.190140    4640 certs.go:385] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key
	I0805 16:20:57.190318    4640 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key
	I0805 16:20:57.190336    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt with IP's: []
	I0805 16:20:57.386717    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt ...
	I0805 16:20:57.386733    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt: {Name:mk486344c8c5b8383e5349f68a995b553e8d31c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.387043    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key ...
	I0805 16:20:57.387052    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key: {Name:mk2b24e1a5e962e12395adf21e4f6ad64901ee0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.387278    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 16:20:57.387306    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 16:20:57.387325    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 16:20:57.387349    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 16:20:57.387368    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 16:20:57.387391    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 16:20:57.387411    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 16:20:57.387432    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 16:20:57.387531    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem (1338 bytes)
	W0805 16:20:57.387583    4640 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0805 16:20:57.387591    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:20:57.387621    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem (1082 bytes)
	I0805 16:20:57.387656    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:20:57.387684    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem (1675 bytes)
	I0805 16:20:57.387747    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:20:57.387781    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem -> /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.387803    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.387822    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.388188    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:20:57.408800    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 16:20:57.429927    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:20:57.449924    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:20:57.470736    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 16:20:57.490564    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 16:20:57.511342    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:20:57.531190    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 16:20:57.551984    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0805 16:20:57.571601    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0805 16:20:57.592369    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:20:57.611866    4640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 16:20:57.626527    4640 ssh_runner.go:195] Run: openssl version
	I0805 16:20:57.630504    4640 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0805 16:20:57.630711    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0805 16:20:57.638913    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642115    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642280    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642315    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.646345    4640 command_runner.go:130] > 51391683
	I0805 16:20:57.646544    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0805 16:20:57.654953    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0805 16:20:57.663842    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667242    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667258    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667300    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.671438    4640 command_runner.go:130] > 3ec20f2e
	I0805 16:20:57.671648    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 16:20:57.679692    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:20:57.688061    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691411    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691493    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691531    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.695572    4640 command_runner.go:130] > b5213941
	I0805 16:20:57.695754    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
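Each CA above is installed the same way: openssl x509 -hash computes the certificate's subject hash, and a <hash>.0 symlink in /etc/ssl/certs lets OpenSSL's lookup-by-hash machinery find the trust anchor. A minimal Go sketch of those two steps (installCACert is an illustrative helper):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert computes the OpenSSL subject hash of the PEM at
    // certPath, then symlinks /etc/ssl/certs/<hash>.0 back to it.
    func installCACert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hash %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        os.Remove(link) // replace any stale link, like ln -fs
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }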
	I0805 16:20:57.704703    4640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:20:57.707752    4640 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 16:20:57.707872    4640 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 16:20:57.707921    4640 kubeadm.go:392] StartCluster: {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:20:57.708054    4640 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:20:57.720408    4640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 16:20:57.731114    4640 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0805 16:20:57.731128    4640 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0805 16:20:57.731133    4640 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0805 16:20:57.731194    4640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 16:20:57.739645    4640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 16:20:57.751095    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0805 16:20:57.751108    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0805 16:20:57.751113    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0805 16:20:57.751120    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:20:57.751266    4640 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:20:57.751273    4640 kubeadm.go:157] found existing configuration files:
	
	I0805 16:20:57.751324    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 16:20:57.759086    4640 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:20:57.759185    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:20:57.759233    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 16:20:57.769060    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 16:20:57.778103    4640 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:20:57.778143    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:20:57.778190    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 16:20:57.786612    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 16:20:57.794733    4640 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:20:57.794754    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:20:57.794796    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 16:20:57.802671    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 16:20:57.810242    4640 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:20:57.810264    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:20:57.810299    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
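The four grep-then-rm rounds above implement stale-config cleanup: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is removed (here none exist yet, so the removals are no-ops). A compact Go sketch of the same loop (cleanStaleConfigs is illustrative):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // cleanStaleConfigs drops any kubeconfig that does not point at the
    // expected control-plane endpoint; a missing file is treated the
    // same as a stale one, matching the grep-then-rm flow in the log.
    func cleanStaleConfigs(endpoint string) {
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
        for _, f := range files {
            path := "/etc/kubernetes/" + f
            data, err := os.ReadFile(path)
            if err != nil || !strings.Contains(string(data), endpoint) {
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
                os.Remove(path)
            }
        }
    }

    func main() {
        cleanStaleConfigs("https://control-plane.minikube.internal:8443")
    }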
	I0805 16:20:57.818339    4640 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 16:20:57.890449    4640 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 16:20:57.890461    4640 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0805 16:20:57.890501    4640 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 16:20:57.890507    4640 command_runner.go:130] > [preflight] Running pre-flight checks
	I0805 16:20:57.984851    4640 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 16:20:57.984855    4640 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 16:20:57.984956    4640 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 16:20:57.984962    4640 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 16:20:57.985041    4640 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0805 16:20:57.985038    4640 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0805 16:20:58.152965    4640 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:20:58.152995    4640 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:20:58.175785    4640 out.go:204]   - Generating certificates and keys ...
	I0805 16:20:58.175840    4640 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0805 16:20:58.175851    4640 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 16:20:58.175914    4640 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0805 16:20:58.175920    4640 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 16:20:58.229002    4640 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 16:20:58.229016    4640 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 16:20:58.322701    4640 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 16:20:58.322717    4640 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0805 16:20:58.394063    4640 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 16:20:58.394077    4640 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0805 16:20:58.601975    4640 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 16:20:58.601995    4640 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0805 16:20:58.821056    4640 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 16:20:58.821065    4640 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0805 16:20:58.821204    4640 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:58.821214    4640 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.150811    4640 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 16:20:59.150817    4640 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0805 16:20:59.151036    4640 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.151046    4640 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.206073    4640 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 16:20:59.206088    4640 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 16:20:59.294956    4640 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 16:20:59.294966    4640 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 16:20:59.348591    4640 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 16:20:59.348602    4640 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0805 16:20:59.348788    4640 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:20:59.348797    4640 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:20:59.511379    4640 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:20:59.511395    4640 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:20:59.789652    4640 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 16:20:59.789666    4640 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 16:20:59.965508    4640 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:20:59.965517    4640 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:21:00.208268    4640 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:21:00.208284    4640 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:21:00.402575    4640 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:21:00.402582    4640 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:21:00.409122    4640 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 16:21:00.409137    4640 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 16:21:00.410639    4640 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:21:00.410652    4640 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:21:00.430944    4640 out.go:204]   - Booting up control plane ...
	I0805 16:21:00.431017    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:21:00.431032    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:21:00.431106    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:21:00.431106    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:21:00.431174    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:21:00.431182    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:21:00.431274    4640 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:21:00.431286    4640 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:21:00.431361    4640 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:21:00.431369    4640 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:21:00.431399    4640 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 16:21:00.431405    4640 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0805 16:21:00.540991    4640 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 16:21:00.541004    4640 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 16:21:00.541076    4640 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 16:21:00.541081    4640 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 16:21:01.042556    4640 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.719164ms
	I0805 16:21:01.042573    4640 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.719164ms
	I0805 16:21:01.042632    4640 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 16:21:01.042639    4640 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 16:21:05.541995    4640 kubeadm.go:310] [api-check] The API server is healthy after 4.502407968s
	I0805 16:21:05.542014    4640 command_runner.go:130] > [api-check] The API server is healthy after 4.502407968s
	I0805 16:21:05.551474    4640 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 16:21:05.551486    4640 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 16:21:05.558278    4640 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 16:21:05.558284    4640 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 16:21:05.572116    4640 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 16:21:05.572130    4640 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0805 16:21:05.572281    4640 kubeadm.go:310] [mark-control-plane] Marking the node multinode-985000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 16:21:05.572292    4640 command_runner.go:130] > [mark-control-plane] Marking the node multinode-985000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 16:21:05.579214    4640 kubeadm.go:310] [bootstrap-token] Using token: 0mwls8.ribzsy6ooov2flu0
	I0805 16:21:05.579225    4640 command_runner.go:130] > [bootstrap-token] Using token: 0mwls8.ribzsy6ooov2flu0
	I0805 16:21:05.613851    4640 out.go:204]   - Configuring RBAC rules ...
	I0805 16:21:05.613974    4640 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 16:21:05.613988    4640 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 16:21:05.655317    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 16:21:05.655329    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 16:21:05.659733    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 16:21:05.659737    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 16:21:05.661608    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 16:21:05.661619    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 16:21:05.663605    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 16:21:05.663612    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 16:21:05.665771    4640 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 16:21:05.665778    4640 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 16:21:05.947572    4640 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 16:21:05.947585    4640 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 16:21:06.357765    4640 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 16:21:06.357776    4640 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0805 16:21:06.946930    4640 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 16:21:06.946942    4640 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0805 16:21:06.947937    4640 kubeadm.go:310] 
	I0805 16:21:06.947989    4640 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 16:21:06.947996    4640 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0805 16:21:06.948000    4640 kubeadm.go:310] 
	I0805 16:21:06.948071    4640 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 16:21:06.948080    4640 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0805 16:21:06.948088    4640 kubeadm.go:310] 
	I0805 16:21:06.948121    4640 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 16:21:06.948125    4640 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0805 16:21:06.948179    4640 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 16:21:06.948187    4640 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 16:21:06.948229    4640 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 16:21:06.948234    4640 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 16:21:06.948237    4640 kubeadm.go:310] 
	I0805 16:21:06.948284    4640 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 16:21:06.948302    4640 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0805 16:21:06.948309    4640 kubeadm.go:310] 
	I0805 16:21:06.948354    4640 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 16:21:06.948367    4640 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 16:21:06.948375    4640 kubeadm.go:310] 
	I0805 16:21:06.948414    4640 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 16:21:06.948418    4640 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0805 16:21:06.948479    4640 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 16:21:06.948488    4640 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 16:21:06.948558    4640 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 16:21:06.948564    4640 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 16:21:06.948570    4640 kubeadm.go:310] 
	I0805 16:21:06.948633    4640 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 16:21:06.948638    4640 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0805 16:21:06.948701    4640 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 16:21:06.948708    4640 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0805 16:21:06.948715    4640 kubeadm.go:310] 
	I0805 16:21:06.948788    4640 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.948795    4640 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.948879    4640 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf \
	I0805 16:21:06.948886    4640 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf \
	I0805 16:21:06.948905    4640 kubeadm.go:310] 	--control-plane 
	I0805 16:21:06.948911    4640 command_runner.go:130] > 	--control-plane 
	I0805 16:21:06.948916    4640 kubeadm.go:310] 
	I0805 16:21:06.948980    4640 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 16:21:06.948984    4640 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0805 16:21:06.948987    4640 kubeadm.go:310] 
	I0805 16:21:06.949052    4640 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.949057    4640 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.949136    4640 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf 
	I0805 16:21:06.949141    4640 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf 
	I0805 16:21:06.949613    4640 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 16:21:06.949621    4640 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
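The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info; joining nodes use it to pin the CA. A minimal Go sketch that recomputes it from ca.crt (caCertHash is an illustrative helper):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash computes the kubeadm discovery hash: sha256 over the
    // CA certificate's DER-encoded SubjectPublicKeyInfo.
    func caCertHash(pemPath string) (string, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return "", fmt.Errorf("no PEM block in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        return fmt.Sprintf("sha256:%x", sum), nil
    }

    func main() {
        h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println(h)
    }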
	I0805 16:21:06.949644    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:21:06.949649    4640 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 16:21:06.972147    4640 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0805 16:21:07.030449    4640 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0805 16:21:07.036220    4640 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0805 16:21:07.036233    4640 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0805 16:21:07.036239    4640 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0805 16:21:07.036249    4640 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 16:21:07.036254    4640 command_runner.go:130] > Access: 2024-08-05 23:20:43.694299549 +0000
	I0805 16:21:07.036259    4640 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0805 16:21:07.036264    4640 command_runner.go:130] > Change: 2024-08-05 23:20:41.058596444 +0000
	I0805 16:21:07.036266    4640 command_runner.go:130] >  Birth: -
	I0805 16:21:07.036368    4640 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0805 16:21:07.036375    4640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0805 16:21:07.050414    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0805 16:21:07.243070    4640 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0805 16:21:07.246445    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0805 16:21:07.250670    4640 command_runner.go:130] > serviceaccount/kindnet created
	I0805 16:21:07.255971    4640 command_runner.go:130] > daemonset.apps/kindnet created
	I0805 16:21:07.257424    4640 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 16:21:07.257500    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-985000 minikube.k8s.io/updated_at=2024_08_05T16_21_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=multinode-985000 minikube.k8s.io/primary=true
	I0805 16:21:07.257502    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.266956    4640 command_runner.go:130] > -16
	I0805 16:21:07.267023    4640 ops.go:34] apiserver oom_adj: -16
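The probe above reads kube-apiserver's oom_adj (-16 in this run); lower values make the kernel's OOM killer less likely to target the process. A small Go sketch of the same pgrep-then-read probe (apiserverOOMAdj is illustrative and assumes a single matching pid):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    // apiserverOOMAdj mirrors the probe in the log: pgrep the apiserver,
    // then read /proc/<pid>/oom_adj.
    func apiserverOOMAdj() (string, error) {
        pid, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            return "", fmt.Errorf("pgrep: %w", err)
        }
        data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        v, err := apiserverOOMAdj()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("apiserver oom_adj:", v) // -16 in this run
    }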
	I0805 16:21:07.390396    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0805 16:21:07.392070    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.400579    4640 command_runner.go:130] > node/multinode-985000 labeled
	I0805 16:21:07.456213    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:07.893323    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.956622    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:08.392391    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:08.450793    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:08.892411    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:08.950456    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:09.393238    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:09.450291    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:09.892156    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:09.951159    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:10.393019    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:10.451734    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:10.893100    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:10.954360    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:11.393009    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:11.452879    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:11.894187    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:11.953480    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:12.392194    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:12.452444    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:12.894265    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:12.955367    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:13.392882    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:13.455680    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:13.892568    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:13.950195    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:14.393254    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:14.452940    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:14.892187    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:14.948447    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:15.392762    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:15.451815    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:15.892531    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:15.952781    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:16.393008    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:16.454659    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:16.892423    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:16.957989    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:17.392489    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:17.452653    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:17.892453    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:17.953809    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:18.392692    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:18.450726    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:18.893940    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:18.957266    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:19.393402    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:19.452345    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:19.892761    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:19.952524    4640 command_runner.go:130] > NAME      SECRETS   AGE
	I0805 16:21:19.952537    4640 command_runner.go:130] > default   0         1s
	I0805 16:21:19.952551    4640 kubeadm.go:1113] duration metric: took 12.695106906s to wait for elevateKubeSystemPrivileges
	I0805 16:21:19.952568    4640 kubeadm.go:394] duration metric: took 22.244643678s to StartCluster
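
	The burst of serviceaccounts "default" not found errors above is expected rather than a failure: kube-controller-manager creates the default ServiceAccount asynchronously after the API server comes up, and minikube simply polls (roughly every 500ms here) until the account exists, which took about 12.7s on this run. A minimal client-go sketch of an equivalent wait, assuming the in-VM kubeconfig path used throughout this log:

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        "k8s.io/apimachinery/pkg/api/errors"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	        if err != nil {
	            panic(err)
	        }
	        client := kubernetes.NewForConfigOrDie(cfg)

	        deadline := time.Now().Add(2 * time.Minute)
	        for time.Now().Before(deadline) {
	            _, err := client.CoreV1().ServiceAccounts("default").
	                Get(context.TODO(), "default", metav1.GetOptions{})
	            if err == nil {
	                fmt.Println("default ServiceAccount exists; pods can be created")
	                return
	            }
	            if !errors.IsNotFound(err) {
	                panic(err) // only NotFound is expected while the controller catches up
	            }
	            time.Sleep(500 * time.Millisecond)
	        }
	        panic("timed out waiting for the default ServiceAccount")
	    }
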
	I0805 16:21:19.952584    4640 settings.go:142] acquiring lock: {Name:mk564a817a54ecf2aef16a4d2309e85208c0231f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:21:19.952678    4640 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:19.953130    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/kubeconfig: {Name:mk2a0d8b4d330b3c26432fc65d015ddf98a9cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:21:19.953387    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 16:21:19.953391    4640 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:21:19.953437    4640 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 16:21:19.953474    4640 addons.go:69] Setting storage-provisioner=true in profile "multinode-985000"
	I0805 16:21:19.953501    4640 addons.go:234] Setting addon storage-provisioner=true in "multinode-985000"
	I0805 16:21:19.953507    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:19.953501    4640 addons.go:69] Setting default-storageclass=true in profile "multinode-985000"
	I0805 16:21:19.953520    4640 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:21:19.953542    4640 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-985000"
	I0805 16:21:19.953772    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.953787    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.953870    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.953897    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.962985    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52500
	I0805 16:21:19.963341    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52502
	I0805 16:21:19.963365    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.963645    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.963722    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.963735    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.963997    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.964004    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.964027    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.964249    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.964372    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.964430    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.964458    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.964465    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.964535    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.966651    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:19.966874    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:19.967275    4640 cert_rotation.go:137] Starting client certificate rotation controller
	I0805 16:21:19.967411    4640 addons.go:234] Setting addon default-storageclass=true in "multinode-985000"
	I0805 16:21:19.967434    4640 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:21:19.967665    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.967688    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.973226    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52504
	I0805 16:21:19.973568    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.973922    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.973942    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.974163    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.974282    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.974363    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.974444    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.975405    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:21:19.975491    4640 out.go:177] * Verifying Kubernetes components...
	I0805 16:21:19.976182    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52506
	I0805 16:21:19.976461    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.976795    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.976812    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.976999    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.977392    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.977409    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.986027    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52508
	I0805 16:21:19.986361    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.986712    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.986741    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.986959    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.987071    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.987149    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.987227    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.988179    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:21:19.988299    4640 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 16:21:19.988307    4640 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 16:21:19.988315    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:21:19.988395    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:21:19.988484    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:21:19.988568    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:21:19.988639    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:21:20.032241    4640 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:21:20.032361    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:20.069496    4640 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:21:20.069510    4640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 16:21:20.069530    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:21:20.069717    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:21:20.069824    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:21:20.069935    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:21:20.070041    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:21:20.084762    4640 command_runner.go:130] > apiVersion: v1
	I0805 16:21:20.084775    4640 command_runner.go:130] > data:
	I0805 16:21:20.084779    4640 command_runner.go:130] >   Corefile: |
	I0805 16:21:20.084782    4640 command_runner.go:130] >     .:53 {
	I0805 16:21:20.084785    4640 command_runner.go:130] >         errors
	I0805 16:21:20.084790    4640 command_runner.go:130] >         health {
	I0805 16:21:20.084794    4640 command_runner.go:130] >            lameduck 5s
	I0805 16:21:20.084796    4640 command_runner.go:130] >         }
	I0805 16:21:20.084812    4640 command_runner.go:130] >         ready
	I0805 16:21:20.084822    4640 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0805 16:21:20.084829    4640 command_runner.go:130] >            pods insecure
	I0805 16:21:20.084833    4640 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0805 16:21:20.084841    4640 command_runner.go:130] >            ttl 30
	I0805 16:21:20.084853    4640 command_runner.go:130] >         }
	I0805 16:21:20.084863    4640 command_runner.go:130] >         prometheus :9153
	I0805 16:21:20.084868    4640 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0805 16:21:20.084880    4640 command_runner.go:130] >            max_concurrent 1000
	I0805 16:21:20.084884    4640 command_runner.go:130] >         }
	I0805 16:21:20.084887    4640 command_runner.go:130] >         cache 30
	I0805 16:21:20.084898    4640 command_runner.go:130] >         loop
	I0805 16:21:20.084902    4640 command_runner.go:130] >         reload
	I0805 16:21:20.084905    4640 command_runner.go:130] >         loadbalance
	I0805 16:21:20.084908    4640 command_runner.go:130] >     }
	I0805 16:21:20.084911    4640 command_runner.go:130] > kind: ConfigMap
	I0805 16:21:20.084914    4640 command_runner.go:130] > metadata:
	I0805 16:21:20.084921    4640 command_runner.go:130] >   creationTimestamp: "2024-08-05T23:21:06Z"
	I0805 16:21:20.084926    4640 command_runner.go:130] >   name: coredns
	I0805 16:21:20.084929    4640 command_runner.go:130] >   namespace: kube-system
	I0805 16:21:20.084933    4640 command_runner.go:130] >   resourceVersion: "266"
	I0805 16:21:20.084937    4640 command_runner.go:130] >   uid: 5057af03-8824-4e67-a4b6-ef90c1ded7ce
	I0805 16:21:20.085056    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0805 16:21:20.184335    4640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:21:20.203408    4640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 16:21:20.278639    4640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:21:20.507141    4640 command_runner.go:130] > configmap/coredns replaced
	I0805 16:21:20.511660    4640 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
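
	The shell pipeline at 16:21:20.085 is what produced the confirmation above: it fetches the coredns ConfigMap, uses sed to insert a hosts block resolving host.minikube.internal to the host-side gateway (192.169.0.1) ahead of the forward stanza (plus a log directive ahead of errors), and pipes the result into kubectl replace. A minimal client-go sketch of the same edit done programmatically, assuming a client built as in the previous sketch; the log directive is omitted:

	    // Assumes "context", "strings", metav1, and the kubernetes package
	    // from the sketch above.
	    func injectHostRecord(ctx context.Context, client kubernetes.Interface, gatewayIP string) error {
	        cm, err := client.CoreV1().ConfigMaps("kube-system").
	            Get(ctx, "coredns", metav1.GetOptions{})
	        if err != nil {
	            return err
	        }
	        hosts := "        hosts {\n" +
	            "           " + gatewayIP + " host.minikube.internal\n" +
	            "           fallthrough\n" +
	            "        }\n"
	        // The Corefile indents plugin directives with eight spaces, so anchor
	        // on the indented "forward ." line and prepend the hosts block.
	        cm.Data["Corefile"] = strings.Replace(
	            cm.Data["Corefile"], "        forward .", hosts+"        forward .", 1)
	        _, err = client.CoreV1().ConfigMaps("kube-system").
	            Update(ctx, cm, metav1.UpdateOptions{})
	        return err
	    }
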
	I0805 16:21:20.511929    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:20.511932    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:20.512124    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:20.512125    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:20.512341    4640 node_ready.go:35] waiting up to 6m0s for node "multinode-985000" to be "Ready" ...
	I0805 16:21:20.512409    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:20.512416    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.512423    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.512424    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:20.512428    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.512430    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.512438    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.512446    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.520076    4640 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0805 16:21:20.520087    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.520092    4640 round_trippers.go:580]     Audit-Id: 304f14c4-a466-4fb6-b401-b28f4df4dfa1
	I0805 16:21:20.520095    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.520103    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.520107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.520111    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.520113    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:20.520117    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.521443    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.521456    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.521464    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.521474    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"381","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.521479    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.521487    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.521502    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.521511    4640 round_trippers.go:580]     Audit-Id: bcd9e393-6b08-4ffb-a73b-6e7c430f0212
	I0805 16:21:20.521518    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.521831    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:20.521865    4640 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"381","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.521904    4640 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:20.521914    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.521921    4640 round_trippers.go:473]     Content-Type: application/json
	I0805 16:21:20.521930    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.521935    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.530726    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.530739    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.530744    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.530748    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:20.530751    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.530754    4640 round_trippers.go:580]     Audit-Id: ba15a3b2-b69b-473e-a331-81e01385ad47
	I0805 16:21:20.530756    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.530758    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.530761    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.530773    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"383","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.588534    4640 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0805 16:21:20.588563    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.588570    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.588737    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.588752    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.588765    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.588764    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.588772    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.588919    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.588920    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.588931    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.589012    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I0805 16:21:20.589020    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.589028    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.589034    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.597496    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.597508    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.597513    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.597518    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.597521    4640 round_trippers.go:580]     Content-Length: 1273
	I0805 16:21:20.597523    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.597525    4640 round_trippers.go:580]     Audit-Id: d7394cfc-1eb3-4623-8a7f-a5088a0398c8
	I0805 16:21:20.597527    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.597530    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.597844    4640 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"391"},"items":[{"metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0805 16:21:20.598117    4640 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0805 16:21:20.598145    4640 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0805 16:21:20.598150    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.598157    4640 round_trippers.go:473]     Content-Type: application/json
	I0805 16:21:20.598166    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.598171    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.619819    4640 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0805 16:21:20.619836    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.619842    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.619846    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.619849    4640 round_trippers.go:580]     Content-Length: 1220
	I0805 16:21:20.619852    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.619855    4640 round_trippers.go:580]     Audit-Id: 299d4cc8-0cb5-4dd5-80b3-5d54592ecd90
	I0805 16:21:20.619859    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.619861    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.619898    4640 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0805 16:21:20.619983    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.619992    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.620141    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.620153    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.620166    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.750372    4640 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0805 16:21:20.753871    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0805 16:21:20.759257    4640 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0805 16:21:20.767575    4640 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0805 16:21:20.774745    4640 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0805 16:21:20.786454    4640 command_runner.go:130] > pod/storage-provisioner created
	I0805 16:21:20.787838    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.787851    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.788087    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.788087    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.788098    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.788109    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.788117    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.788261    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.788280    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.788280    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.811467    4640 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0805 16:21:20.871433    4640 addons.go:510] duration metric: took 917.995637ms for enable addons: enabled=[default-storageclass storage-provisioner]
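
	For the default-storageclass addon, the GET and PUT against /apis/storage.k8s.io/v1/storageclasses above show minikube ensuring the "standard" class carries the storageclass.kubernetes.io/is-default-class: "true" annotation, the stock Kubernetes mechanism that makes PVCs without an explicit storageClassName bind to it. A minimal sketch that sets the same annotation with a merge patch instead of a full replace (client as in the earlier sketches):

	    // Assumes "context", metav1, the kubernetes package, and
	    // "k8s.io/apimachinery/pkg/types".
	    func markDefaultStorageClass(ctx context.Context, client kubernetes.Interface, name string) error {
	        patch := []byte(`{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}`)
	        _, err := client.StorageV1().StorageClasses().
	            Patch(ctx, name, types.MergePatchType, patch, metav1.PatchOptions{})
	        return err
	    }
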
	I0805 16:21:21.014507    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:21.014532    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.014545    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.014553    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.014605    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:21.014619    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.014631    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.014638    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.017465    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.017464    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.017480    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.017492    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.017492    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.017496    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:21.017502    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.017504    4640 round_trippers.go:580]     Audit-Id: fb264fed-80ee-469b-a34e-7b1e8460f94b
	I0805 16:21:21.017506    4640 round_trippers.go:580]     Audit-Id: c9362211-8dfc-4385-87db-76c6486df53e
	I0805 16:21:21.017512    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.017513    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.017518    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.017519    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.017522    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.017524    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.017529    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.017545    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.017616    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"395","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:21.017684    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:21.017735    4640 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-985000" context rescaled to 1 replicas
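
	The GET/PUT pair against .../deployments/coredns/scale above is how minikube trims CoreDNS from the default two replicas to one on a single-node cluster: it reads the Scale subresource, rewrites spec.replicas, and writes it back without touching the rest of the Deployment. client-go exposes the same subresource directly; a minimal sketch (client as before):

	    // Assumes "context", metav1, and the kubernetes package from the
	    // sketches above.
	    func rescaleCoreDNS(ctx context.Context, client kubernetes.Interface) error {
	        scale, err := client.AppsV1().Deployments("kube-system").
	            GetScale(ctx, "coredns", metav1.GetOptions{})
	        if err != nil {
	            return err
	        }
	        scale.Spec.Replicas = 1 // down from 2, as in the request body above
	        _, err = client.AppsV1().Deployments("kube-system").
	            UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	        return err
	    }
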
	I0805 16:21:21.514170    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:21.514200    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.514219    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.514226    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.516804    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.516819    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.516826    4640 round_trippers.go:580]     Audit-Id: 9396255c-231d-48cb-a53f-22663307b969
	I0805 16:21:21.516830    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.516834    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.516839    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.516849    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.516854    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.516951    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.013275    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:22.013299    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:22.013311    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:22.013319    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:22.016138    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:22.016155    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:22.016163    4640 round_trippers.go:580]     Audit-Id: cc869aef-9ab4-4a7f-8835-cce2afa76dd9
	I0805 16:21:22.016168    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:22.016175    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:22.016182    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:22.016187    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:22.016193    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:22 GMT
	I0805 16:21:22.016497    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.512546    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:22.512561    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:22.512567    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:22.512572    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:22.515381    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:22.515393    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:22.515401    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:22.515407    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:22.515412    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:22.515416    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:22 GMT
	I0805 16:21:22.515420    4640 round_trippers.go:580]     Audit-Id: e7d470a0-7df5-4d85-9bb5-cbf15cfa989f
	I0805 16:21:22.515423    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:22.515634    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.515838    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
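
	node_ready.go reaches the "Ready":"False" verdict above by fetching the Node object about twice a second and inspecting its status conditions; the node flips to Ready once the kubelet reports a working network, i.e. after the kindnet CNI pods come up. A minimal sketch of that condition check (client as before):

	    // Assumes "context", metav1, the kubernetes package, and
	    // corev1 "k8s.io/api/core/v1".
	    func isNodeReady(ctx context.Context, client kubernetes.Interface, name string) (bool, error) {
	        node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	        if err != nil {
	            return false, err
	        }
	        for _, c := range node.Status.Conditions {
	            if c.Type == corev1.NodeReady {
	                return c.Status == corev1.ConditionTrue, nil
	            }
	        }
	        return false, nil // no Ready condition posted yet
	    }
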
	I0805 16:21:23.012594    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:23.012606    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:23.012612    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:23.012616    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:23.014085    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:23.014095    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:23.014101    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:23.014104    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:23.014107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:23.014109    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:23.014113    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:23 GMT
	I0805 16:21:23.014116    4640 round_trippers.go:580]     Audit-Id: e12d5034-3bd9-498b-844e-12133805ded9
	I0805 16:21:23.014306    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:23.513150    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:23.513163    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:23.513168    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:23.513172    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:23.514595    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:23.514604    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:23.514610    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:23.514614    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:23.514617    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:23.514619    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:23.514622    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:23 GMT
	I0805 16:21:23.514635    4640 round_trippers.go:580]     Audit-Id: 2bc52e3b-1575-453f-87fa-51f4301a9426
	I0805 16:21:23.514871    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:24.012814    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:24.012826    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:24.012832    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:24.012835    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:24.014366    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:24.014379    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:24.014384    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:24.014388    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:24.014406    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:24.014411    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:24.014414    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:24 GMT
	I0805 16:21:24.014417    4640 round_trippers.go:580]     Audit-Id: f14d8611-e5e1-45fe-92f3-95559148c71b
	I0805 16:21:24.014572    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:24.513607    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:24.513620    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:24.513626    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:24.513629    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:24.515210    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:24.515220    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:24.515242    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:24.515253    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:24.515260    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:24.515264    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:24.515268    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:24 GMT
	I0805 16:21:24.515271    4640 round_trippers.go:580]     Audit-Id: 0a897d84-d437-4212-b36d-e414fedf55d4
	I0805 16:21:24.515427    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:25.013253    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:25.013272    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:25.013283    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:25.013321    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:25.015275    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:25.015308    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:25.015317    4640 round_trippers.go:580]     Audit-Id: ced7b45c-a072-4322-89ab-d0cc21ddfb1d
	I0805 16:21:25.015322    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:25.015325    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:25.015328    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:25.015332    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:25.015336    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:25 GMT
	I0805 16:21:25.015627    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:25.015849    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
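	
	(The ~0.5-second cadence of GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000 calls above is minikube's node-readiness wait: fetch the Node object, check its Ready condition, retry until it reports True. Below is a minimal client-go sketch of that polling pattern — not minikube's actual node_ready.go; the kubeconfig path is illustrative only.)
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// isNodeReady reports whether the node's NodeReady condition is True —
	// the check behind the `has status "Ready":"False"` lines in this log.
	func isNodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		// Hypothetical kubeconfig path; the harness points at its own MINIKUBE_HOME.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll roughly every 500ms, matching the gaps between GETs above.
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "multinode-985000", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient errors as "not ready yet" and keep polling
				}
				return isNodeReady(node), nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println(`node "multinode-985000" is Ready`)
	}
	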
	I0805 16:21:25.512881    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:25.512902    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:25.512914    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:25.512920    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:25.515502    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:25.515517    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:25.515524    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:25.515529    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:25.515534    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:25.515538    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:25.515542    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:25 GMT
	I0805 16:21:25.515545    4640 round_trippers.go:580]     Audit-Id: dd6b59c1-dde3-4d67-b446-8823ad717d4f
	I0805 16:21:25.515665    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:26.013787    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:26.013811    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:26.013824    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:26.013830    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:26.016420    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:26.016440    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:26.016463    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:26 GMT
	I0805 16:21:26.016470    4640 round_trippers.go:580]     Audit-Id: 19939705-2879-44e6-830c-0c86394087ed
	I0805 16:21:26.016473    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:26.016485    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:26.016490    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:26.016494    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:26.016965    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:26.512523    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:26.512536    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:26.512541    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:26.512544    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:26.514158    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:26.514167    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:26.514172    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:26.514176    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:26.514179    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:26.514182    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:26 GMT
	I0805 16:21:26.514184    4640 round_trippers.go:580]     Audit-Id: f2346665-2701-41e1-94b0-41a70aa2f170
	I0805 16:21:26.514187    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:26.514489    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:27.013107    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:27.013136    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:27.013148    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:27.013155    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:27.015615    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:27.015632    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:27.015639    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:27 GMT
	I0805 16:21:27.015655    4640 round_trippers.go:580]     Audit-Id: 6abee22d-c1db-48e9-99db-e07791ed571f
	I0805 16:21:27.015661    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:27.015664    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:27.015667    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:27.015672    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:27.015747    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:27.015996    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:27.513549    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:27.513570    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:27.513582    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:27.513589    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:27.516173    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:27.516189    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:27.516197    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:27.516200    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:27.516204    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:27.516209    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:27 GMT
	I0805 16:21:27.516212    4640 round_trippers.go:580]     Audit-Id: a227585b-ae23-4bd1-b1dc-643eadd970cc
	I0805 16:21:27.516215    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:27.516416    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:28.014104    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:28.014132    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:28.014143    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:28.014159    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:28.016690    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:28.016705    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:28.016713    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:28.016717    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:28.016721    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:28.016725    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:28.016728    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:28 GMT
	I0805 16:21:28.016731    4640 round_trippers.go:580]     Audit-Id: 0d14831c-cc1f-41a9-a252-85e191b9594d
	I0805 16:21:28.016834    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:28.512703    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:28.512726    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:28.512739    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:28.512747    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:28.515176    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:28.515190    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:28.515197    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:28 GMT
	I0805 16:21:28.515201    4640 round_trippers.go:580]     Audit-Id: 6af459f8-bb08-43bf-ac7f-51ccacd5d664
	I0805 16:21:28.515206    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:28.515211    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:28.515215    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:28.515219    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:28.515378    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.013324    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:29.013354    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:29.013360    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:29.013364    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:29.014793    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:29.014804    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:29.014809    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:29.014813    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:29 GMT
	I0805 16:21:29.014817    4640 round_trippers.go:580]     Audit-Id: 2e50ff34-0c55-4136-b537-eee73f73706d
	I0805 16:21:29.014819    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:29.014822    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:29.014826    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:29.015098    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.513802    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:29.513832    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:29.513844    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:29.513852    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:29.516479    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:29.516496    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:29.516504    4640 round_trippers.go:580]     Audit-Id: bcbc3920-26b4-45f4-b91a-ce0e3dc11770
	I0805 16:21:29.516529    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:29.516538    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:29.516544    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:29.516549    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:29.516554    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:29 GMT
	I0805 16:21:29.516682    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.516938    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:30.013325    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:30.013349    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:30.013436    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:30.013448    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:30.016209    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:30.016222    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:30.016228    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:30.016233    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:30 GMT
	I0805 16:21:30.016238    4640 round_trippers.go:580]     Audit-Id: fb0bd3e0-89c3-4c77-a27d-be315cab22b7
	I0805 16:21:30.016242    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:30.016277    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:30.016283    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:30.016477    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:30.514344    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:30.514386    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:30.514482    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:30.514494    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:30.518828    4640 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 16:21:30.518860    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:30.518870    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:30.518876    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:30.518882    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:30 GMT
	I0805 16:21:30.518888    4640 round_trippers.go:580]     Audit-Id: c1b08932-ee78-4dcb-a190-3a8b24421284
	I0805 16:21:30.518894    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:30.518899    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:30.519002    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:31.012673    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:31.012701    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:31.012712    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:31.012718    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:31.015543    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:31.015560    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:31.015568    4640 round_trippers.go:580]     Audit-Id: b6586a64-ec07-44ee-8a00-1f3b8a00e0bd
	I0805 16:21:31.015572    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:31.015576    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:31.015580    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:31.015583    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:31.015589    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:31 GMT
	I0805 16:21:31.015682    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:31.512531    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:31.512543    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:31.512550    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:31.512554    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:31.514066    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:31.514076    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:31.514081    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:31 GMT
	I0805 16:21:31.514085    4640 round_trippers.go:580]     Audit-Id: 7d410de7-b0d5-4d4e-8455-d31b0df7d302
	I0805 16:21:31.514089    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:31.514093    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:31.514096    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:31.514107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:31.514758    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:32.014110    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:32.014136    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:32.014147    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:32.014157    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:32.016553    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:32.016570    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:32.016580    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:32.016586    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:32.016592    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:32.016598    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:32.016602    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:32 GMT
	I0805 16:21:32.016605    4640 round_trippers.go:580]     Audit-Id: 67fdb64b-273a-46c2-aac5-c3b115422aa4
	I0805 16:21:32.016861    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:32.017132    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:32.513171    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:32.513188    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:32.513195    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:32.513198    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:32.514908    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:32.514920    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:32.514925    4640 round_trippers.go:580]     Audit-Id: 0f5a2e98-6be6-4963-8897-91c70642048c
	I0805 16:21:32.514928    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:32.514931    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:32.514933    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:32.514936    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:32.514939    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:32 GMT
	I0805 16:21:32.515082    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:33.013769    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:33.013803    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:33.013814    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:33.013822    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:33.016491    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:33.016509    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:33.016519    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:33 GMT
	I0805 16:21:33.016526    4640 round_trippers.go:580]     Audit-Id: 96b5f269-7be9-42a9-9687-cba57d05f76e
	I0805 16:21:33.016532    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:33.016538    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:33.016543    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:33.016548    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:33.016715    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:33.512751    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:33.512772    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:33.512783    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:33.512789    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:33.515431    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:33.515480    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:33.515498    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:33.515506    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:33.515510    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:33 GMT
	I0805 16:21:33.515513    4640 round_trippers.go:580]     Audit-Id: 6cd252a3-d07d-441e-bcf4-bc3bd00c2488
	I0805 16:21:33.515517    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:33.515520    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:33.515747    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.013003    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:34.013032    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:34.013043    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:34.013052    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:34.015447    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:34.015465    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:34.015472    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:34.015476    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:34.015479    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:34.015484    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:34.015487    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:34 GMT
	I0805 16:21:34.015492    4640 round_trippers.go:580]     Audit-Id: efcfb0d1-8345-4db5-bce9-e31085842da3
	I0805 16:21:34.015599    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.513298    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:34.513317    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:34.513376    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:34.513383    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:34.515051    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:34.515065    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:34.515072    4640 round_trippers.go:580]     Audit-Id: 2a42cb6a-0051-47bd-85f4-9f8ca80afa70
	I0805 16:21:34.515078    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:34.515081    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:34.515087    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:34.515099    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:34.515103    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:34 GMT
	I0805 16:21:34.515359    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.515540    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:35.013932    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:35.013957    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:35.013968    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:35.013976    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:35.016505    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:35.016524    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:35.016530    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:35.016537    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:35.016541    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:35.016544    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:35.016555    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:35 GMT
	I0805 16:21:35.016559    4640 round_trippers.go:580]     Audit-Id: 09fa0e04-c026-439e-9cd7-392fd82b16fe
	I0805 16:21:35.016913    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:35.513491    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:35.513514    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:35.513526    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:35.513532    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:35.515995    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:35.516012    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:35.516020    4640 round_trippers.go:580]     Audit-Id: a2b05a8a-9a91-4d20-93d0-b8701ac59b95
	I0805 16:21:35.516024    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:35.516036    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:35.516041    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:35.516055    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:35.516060    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:35 GMT
	I0805 16:21:35.516151    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:36.013521    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.013549    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.013561    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.013566    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.016095    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:36.016112    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.016119    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.016131    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.016136    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.016140    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.016144    4640 round_trippers.go:580]     Audit-Id: 77e04f39-a037-4ea2-9716-ad04139089d1
	I0805 16:21:36.016147    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.016230    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:36.016465    4640 node_ready.go:49] node "multinode-985000" has status "Ready":"True"
	I0805 16:21:36.016481    4640 node_ready.go:38] duration metric: took 15.504115701s for node "multinode-985000" to be "Ready" ...
	I0805 16:21:36.016489    4640 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
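
The node_ready and pod_ready phases above are plain condition polls against the apiserver. As a rough illustration only (a minimal client-go sketch assuming a reachable kubeconfig, not minikube's actual node_ready.go), the node wait is equivalent in spirit to:

    // Illustrative sketch: poll the Node until its Ready condition is True.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms (the cadence visible in the GET .../nodes/... lines)
        // until Ready or until the 6m budget is spent.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, getErr := cs.CoreV1().Nodes().Get(ctx, "multinode-985000", metav1.GetOptions{})
                if getErr != nil {
                    return false, nil // treat errors as "not ready yet" and keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("node ready:", err == nil)
    }
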
	I0805 16:21:36.016543    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:36.016551    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.016559    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.016563    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.019046    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:36.019057    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.019065    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.019069    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.019078    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.019081    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.019084    4640 round_trippers.go:580]     Audit-Id: 96048303-6e62-4ba8-a291-bc1ad976756e
	I0805 16:21:36.019091    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.019721    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
	I0805 16:21:36.021921    4640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:36.021960    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:36.021964    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.021970    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.021974    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.023179    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.023187    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.023192    4640 round_trippers.go:580]     Audit-Id: ba42f387-f106-4773-86de-3a22085fd86a
	I0805 16:21:36.023195    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.023198    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.023200    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.023204    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.023208    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.023410    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:36.023652    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.023659    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.023665    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.023671    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.024732    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.024744    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.024752    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.024758    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.024765    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.024768    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.024771    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.024775    4640 round_trippers.go:580]     Audit-Id: 2008721c-b230-4e73-b037-d3a843d7c7c8
	I0805 16:21:36.024909    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:36.523495    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:36.523508    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.523514    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.523519    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.525003    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.525014    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.525020    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.525042    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.525049    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.525053    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.525060    4640 round_trippers.go:580]     Audit-Id: 1ad5a8dd-64b3-4881-9a8e-e5eaab368c53
	I0805 16:21:36.525066    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.525202    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:36.525483    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.525490    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.525498    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.525502    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.526801    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.526810    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.526814    4640 round_trippers.go:580]     Audit-Id: 71c4017f-a267-489e-86ed-59098eae3b88
	I0805 16:21:36.526817    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.526834    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.526840    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.526846    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.526850    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.527025    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:37.022759    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:37.022781    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.022791    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.022799    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.025487    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.025503    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.025510    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.025515    4640 round_trippers.go:580]     Audit-Id: 7446d9fd-22ed-4d20-b0f2-e8c4a88b04f4
	I0805 16:21:37.025536    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.025543    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.025547    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.025556    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.025649    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:37.026010    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.026020    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.026028    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.026033    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.027337    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.027346    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.027354    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.027359    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.027363    4640 round_trippers.go:580]     Audit-Id: a309eed4-f088-47f7-8b84-4761b59dbb8c
	I0805 16:21:37.027366    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.027368    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.027371    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.027425    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.522283    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:37.522304    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.522315    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.522322    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.524762    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.524776    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.524782    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.524788    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.524792    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.524795    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.524799    4640 round_trippers.go:580]     Audit-Id: eaef42a8-7b43-4091-9b70-8d31adc979e5
	I0805 16:21:37.524803    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.525073    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0805 16:21:37.525438    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.525480    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.525488    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.525492    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.526890    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.526903    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.526912    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.526918    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.526927    4640 round_trippers.go:580]     Audit-Id: a3a0e71a-c982-4504-9fae-e76101688c05
	I0805 16:21:37.526931    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.526935    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.526937    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.527034    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.527211    4640 pod_ready.go:92] pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.527220    4640 pod_ready.go:81] duration metric: took 1.505289062s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.527230    4640 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
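
For context, the per-pod "Ready" test that pod_ready.go applies to coredns above, and to etcd and the other control-plane pods below, amounts to reading the pod's PodReady condition. A minimal client-go sketch (the helper name isPodReady is illustrative, not minikube's code):

    // Illustrative sketch of the per-pod Ready test behind the pod_ready.go loop.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
            "coredns-7db6d8ff4d-fqtll", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("ready:", isPodReady(pod))
    }
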
	I0805 16:21:37.527259    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-985000
	I0805 16:21:37.527264    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.527269    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.527277    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.528379    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.528389    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.528394    4640 round_trippers.go:580]     Audit-Id: 3cf4f372-47fb-4b72-9b30-185d93d01537
	I0805 16:21:37.528401    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.528405    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.528408    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.528411    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.528414    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.528618    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-985000","namespace":"kube-system","uid":"8d7ca2d9-8c7b-41b9-a199-de6449107471","resourceVersion":"379","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.mirror":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030299Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0805 16:21:37.528833    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.528840    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.528845    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.528850    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.529802    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.529808    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.529813    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.529816    4640 round_trippers.go:580]     Audit-Id: 314df0bd-894e-4607-bad0-3348c18fe807
	I0805 16:21:37.529820    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.529823    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.529826    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.529833    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.530046    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.530203    4640 pod_ready.go:92] pod "etcd-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.530210    4640 pod_ready.go:81] duration metric: took 2.974841ms for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.530218    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.530249    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-985000
	I0805 16:21:37.530253    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.530259    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.530262    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.531449    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.531456    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.531461    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.531463    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.531467    4640 round_trippers.go:580]     Audit-Id: 1801a8f0-22d5-44e8-942c-ea521b1ffa66
	I0805 16:21:37.531469    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.531475    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.531477    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.531592    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-985000","namespace":"kube-system","uid":"9be3378a-5fab-4907-baad-507918e714e4","resourceVersion":"369","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.mirror":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0805 16:21:37.531810    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.531820    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.531825    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.531830    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.532663    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.532668    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.532672    4640 round_trippers.go:580]     Audit-Id: 6d0fc4ed-c609-4ee7-a57f-b61eed1bc442
	I0805 16:21:37.532675    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.532679    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.532682    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.532684    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.532688    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.532807    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.532958    4640 pod_ready.go:92] pod "kube-apiserver-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.532967    4640 pod_ready.go:81] duration metric: took 2.743443ms for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.532973    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.533000    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-985000
	I0805 16:21:37.533004    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.533009    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.533012    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.533987    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.533995    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.534000    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.534004    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.534020    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.534027    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.534031    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.534034    4640 round_trippers.go:580]     Audit-Id: 97e4dc5c-f4bf-419e-8b15-be800418054c
	I0805 16:21:37.534147    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-985000","namespace":"kube-system","uid":"4ad64361-65de-4b0b-b2a3-07df18c2e603","resourceVersion":"342","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.mirror":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.seen":"2024-08-05T23:21:06.366027130Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0805 16:21:37.534370    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.534377    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.534383    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.534386    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.535293    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.535301    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.535305    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.535308    4640 round_trippers.go:580]     Audit-Id: a4c04a0a-9401-41d1-a0fc-f2a2187abde4
	I0805 16:21:37.535310    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.535313    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.535320    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.535323    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.535432    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.535591    4640 pod_ready.go:92] pod "kube-controller-manager-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.535599    4640 pod_ready.go:81] duration metric: took 2.621545ms for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.535606    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.535629    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwgw7
	I0805 16:21:37.535634    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.535639    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.535643    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.536550    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.536557    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.536565    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.536570    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.536575    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.536578    4640 round_trippers.go:580]     Audit-Id: 5a688e80-7db3-4070-a1a8-c3419ddb4d44
	I0805 16:21:37.536580    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.536582    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.536704    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fwgw7","generateName":"kube-proxy-","namespace":"kube-system","uid":"3fb72e39-699d-4123-ae5e-e314a191d904","resourceVersion":"409","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b6258e6-7b31-4600-b32b-4a269867c123","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b6258e6-7b31-4600-b32b-4a269867c123\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0805 16:21:37.614745    4640 request.go:629] Waited for 77.807971ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.614815    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.614822    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.614839    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.614845    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.616956    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.616984    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.616989    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.616993    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.616996    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.616999    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.617002    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.617005    4640 round_trippers.go:580]     Audit-Id: e297627c-4c52-417b-935c-d406bf086c16
	I0805 16:21:37.617232    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.617428    4640 pod_ready.go:92] pod "kube-proxy-fwgw7" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.617437    4640 pod_ready.go:81] duration metric: took 81.82693ms for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
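
The request.go:629 "Waited ... due to client-side throttling, not priority and fairness" messages above and below come from client-go's default token-bucket rate limiter, not from server-side API Priority and Fairness. A minimal sketch of where that knob lives (values raised here purely for illustration; client-go's defaults are QPS=5, Burst=10):

    // Illustrative sketch: configuring the client-side rate limiter.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Every clientset built from this config shares the token bucket:
        // bursts of GETs beyond Burst are delayed client-side, which is
        // exactly what request.go:629 reports in the log.
        cfg.QPS = 50
        cfg.Burst = 100
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("client configured:", cs != nil)
    }
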
	I0805 16:21:37.617444    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.815296    4640 request.go:629] Waited for 197.761592ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:21:37.815347    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:21:37.815355    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.815366    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.815376    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.817961    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.817976    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.818001    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.818008    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:37.818049    4640 round_trippers.go:580]     Audit-Id: cc44c4e8-8012-4718-aa24-c05fec399a2e
	I0805 16:21:37.818064    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.818078    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.818082    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.818186    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-985000","namespace":"kube-system","uid":"5e23b1b7-e45d-4b43-831c-aa835c5e536d","resourceVersion":"396","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.mirror":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.seen":"2024-08-05T23:21:06.366029633Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0805 16:21:38.014472    4640 request.go:629] Waited for 195.947535ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:38.014569    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:38.014578    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.014589    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.014597    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.017395    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.017406    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.017413    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.017418    4640 round_trippers.go:580]     Audit-Id: 925efcbc-f43b-4431-905e-26927bb76a48
	I0805 16:21:38.017422    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.017428    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.017434    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.017441    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.017905    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:38.018153    4640 pod_ready.go:92] pod "kube-scheduler-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:38.018164    4640 pod_ready.go:81] duration metric: took 400.713995ms for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:38.018173    4640 pod_ready.go:38] duration metric: took 2.001673669s for extra waiting for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:21:38.018198    4640 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:21:38.018268    4640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:21:38.030133    4640 command_runner.go:130] > 1977
	I0805 16:21:38.030360    4640 api_server.go:72] duration metric: took 18.07694495s to wait for apiserver process to appear ...
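
The process probe above runs pgrep on the guest over SSH: -f matches against the full command line, -x requires the pattern to match exactly, -n keeps only the newest matching process, and the single line of output ("1977") is the PID. A local, simplified sketch using os/exec (minikube actually executes this on the VM via ssh_runner):

    // Illustrative sketch of the apiserver process probe, run locally.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            fmt.Println("apiserver process not found:", err)
            return
        }
        fmt.Println("apiserver pid:", strings.TrimSpace(string(out))) // "1977" in the log
    }
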
	I0805 16:21:38.030369    4640 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:21:38.030384    4640 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:21:38.034009    4640 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0805 16:21:38.034048    4640 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0805 16:21:38.034052    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.034058    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.034063    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.034646    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:38.034653    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.034658    4640 round_trippers.go:580]     Audit-Id: 9f5c9766-330c-4bb5-a5de-4c3a0fdbe474
	I0805 16:21:38.034662    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.034665    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.034668    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.034670    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.034673    4640 round_trippers.go:580]     Content-Length: 263
	I0805 16:21:38.034676    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.034687    4640 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0805 16:21:38.034733    4640 api_server.go:141] control plane version: v1.30.3
	I0805 16:21:38.034742    4640 api_server.go:131] duration metric: took 4.369143ms to wait for apiserver health ...
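
The two checks above map onto two client-go calls: a raw GET of /healthz that must return the literal body "ok", and a discovery call that decodes the /version document shown in the log. A minimal sketch, assuming a reachable kubeconfig (not minikube's api_server.go):

    // Illustrative sketch of the healthz and version probes.
    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Raw healthz probe: a 200 with body "ok" means the apiserver is serving.
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").Do(context.Background()).Raw()
        if err != nil {
            panic(err)
        }
        fmt.Println("healthz:", string(body))
        // The /version document decodes into a version.Info struct.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion) // v1.30.3 in the log
    }
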
	I0805 16:21:38.034747    4640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 16:21:38.213812    4640 request.go:629] Waited for 178.999213ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.213950    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.213960    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.213970    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.213980    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.217309    4640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:21:38.217324    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.217331    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.217336    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.217363    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.217372    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.217377    4640 round_trippers.go:580]     Audit-Id: 0f21513f-44e7-4d2f-bacd-2a12fceef757
	I0805 16:21:38.217381    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.217979    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0805 16:21:38.219249    4640 system_pods.go:59] 8 kube-system pods found
	I0805 16:21:38.219261    4640 system_pods.go:61] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:21:38.219265    4640 system_pods.go:61] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:21:38.219268    4640 system_pods.go:61] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:21:38.219271    4640 system_pods.go:61] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:21:38.219276    4640 system_pods.go:61] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:21:38.219278    4640 system_pods.go:61] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:21:38.219280    4640 system_pods.go:61] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:21:38.219283    4640 system_pods.go:61] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:21:38.219286    4640 system_pods.go:74] duration metric: took 184.535842ms to wait for pod list to return data ...
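
The system_pods check above is a single list of the kube-system namespace followed by a per-pod phase test. A simplified sketch (minikube additionally matches the expected component labels):

    // Illustrative sketch: list kube-system pods once and test each phase.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d kube-system pods found\n", len(pods.Items))
        for _, p := range pods.Items {
            // Mirrors the `"<name>" [<uid>] Running` lines in the log.
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
            if p.Status.Phase != corev1.PodRunning {
                fmt.Println("still waiting on:", p.Name)
            }
        }
    }
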
	I0805 16:21:38.219291    4640 default_sa.go:34] waiting for default service account to be created ...
	I0805 16:21:38.413643    4640 request.go:629] Waited for 194.308242ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:21:38.413680    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:21:38.413687    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.413695    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.413699    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.415522    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:38.415531    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.415536    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.415539    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.415543    4640 round_trippers.go:580]     Content-Length: 261
	I0805 16:21:38.415546    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.415548    4640 round_trippers.go:580]     Audit-Id: efc85c0c-9bbc-4cb7-8c14-19ba2f873800
	I0805 16:21:38.415551    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.415553    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.415563    4640 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b0626468-f73b-4e9b-8270-658495d43f4a","resourceVersion":"337","creationTimestamp":"2024-08-05T23:21:19Z"}}]}
	I0805 16:21:38.415681    4640 default_sa.go:45] found service account: "default"
	I0805 16:21:38.415690    4640 default_sa.go:55] duration metric: took 196.394719ms for default service account to be created ...
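
The ~190-200ms "Waited for ... due to client-side throttling" entries above come from client-go's token-bucket rate limiter, not from server-side API Priority and Fairness (the Pf-Flowschema headers in the responses are unrelated). A minimal sketch, assuming the run's kubeconfig path and illustrative QPS/Burst values (client-go's defaults are QPS=5, Burst=10):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Path taken from this run's environment; adjust for your own setup.
    	config, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19373-1122/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	// client-go defaults to QPS=5, Burst=10; raising them shortens the
    	// "Waited for ... due to client-side throttling" delays seen above.
    	config.QPS = 50
    	config.Burst = 100
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("client ready: %T\n", clientset)
    }
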
	I0805 16:21:38.415697    4640 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 16:21:38.613742    4640 request.go:629] Waited for 198.012461ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.613858    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.613864    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.613870    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.613874    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.616077    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.616090    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.616099    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.616106    4640 round_trippers.go:580]     Audit-Id: 3f8a6f23-788b-41c4-8dee-6ff59c02c21d
	I0805 16:21:38.616112    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.616116    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.616126    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.616143    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.616489    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0805 16:21:38.617747    4640 system_pods.go:86] 8 kube-system pods found
	I0805 16:21:38.617761    4640 system_pods.go:89] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:21:38.617766    4640 system_pods.go:89] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:21:38.617770    4640 system_pods.go:89] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:21:38.617773    4640 system_pods.go:89] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:21:38.617776    4640 system_pods.go:89] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:21:38.617780    4640 system_pods.go:89] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:21:38.617784    4640 system_pods.go:89] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:21:38.617787    4640 system_pods.go:89] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:21:38.617792    4640 system_pods.go:126] duration metric: took 202.090644ms to wait for k8s-apps to be running ...
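
The system_pods check above lists every pod in kube-system and requires each to report phase Running. A rough client-go equivalent, reusing the same kubeconfig (illustrative, not minikube's exact code):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19373-1122/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	// Same GET as the round_trippers lines above: list kube-system pods
    	// and flag any that are not yet Running.
    	pods, err := clientset.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			fmt.Printf("%s is %s\n", p.Name, p.Status.Phase)
    		}
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    }
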
	I0805 16:21:38.617801    4640 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 16:21:38.617848    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:21:38.629448    4640 system_svc.go:56] duration metric: took 11.643357ms WaitForService to wait for kubelet
	I0805 16:21:38.629463    4640 kubeadm.go:582] duration metric: took 18.676048708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
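
The kubelet check above is a plain systemctl probe: a zero exit status from "systemctl is-active --quiet" is taken to mean the unit is running. A local stand-in sketch (the real run executes the command inside the VM via ssh_runner):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirrors the exact command from the log; a nil error (exit 0) is
    	// interpreted as "kubelet is active".
    	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
    	fmt.Println("kubelet active:", err == nil)
    }
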
	I0805 16:21:38.629475    4640 node_conditions.go:102] verifying NodePressure condition ...
	I0805 16:21:38.814057    4640 request.go:629] Waited for 184.539621ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0805 16:21:38.814182    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0805 16:21:38.814193    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.814205    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.814213    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.817076    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.817092    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.817099    4640 round_trippers.go:580]     Audit-Id: 83bb2c88-8ae3-45b7-a0f6-9d3f9fead5f2
	I0805 16:21:38.817103    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.817112    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.817116    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.817123    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.817128    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:39 GMT
	I0805 16:21:38.817200    4640 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I0805 16:21:38.817474    4640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:21:38.817490    4640 node_conditions.go:123] node cpu capacity is 2
	I0805 16:21:38.817502    4640 node_conditions.go:105] duration metric: took 188.023135ms to run NodePressure ...
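
The NodePressure verification reads each node's capacity from the NodeList response, which is where the ephemeral-storage and CPU figures above come from. A hedged client-go sketch of the same read:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19373-1122/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Same fields the node_conditions lines report above.
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
    	}
    }
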
	I0805 16:21:38.817512    4640 start.go:241] waiting for startup goroutines ...
	I0805 16:21:38.817520    4640 start.go:246] waiting for cluster config update ...
	I0805 16:21:38.817530    4640 start.go:255] writing updated cluster config ...
	I0805 16:21:38.838343    4640 out.go:177] 
	I0805 16:21:38.859405    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:38.859465    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:38.881260    4640 out.go:177] * Starting "multinode-985000-m02" worker node in "multinode-985000" cluster
	I0805 16:21:38.923226    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:21:38.923254    4640 cache.go:56] Caching tarball of preloaded images
	I0805 16:21:38.923425    4640 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:21:38.923439    4640 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:21:38.923503    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:38.924257    4640 start.go:360] acquireMachinesLock for multinode-985000-m02: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:21:38.924355    4640 start.go:364] duration metric: took 78.775µs to acquireMachinesLock for "multinode-985000-m02"
	I0805 16:21:38.924379    4640 start.go:93] Provisioning new machine with config: &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0805 16:21:38.924443    4640 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0805 16:21:38.946258    4640 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:21:38.946431    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:38.946482    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:38.956315    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52515
	I0805 16:21:38.956651    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:38.957008    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:38.957028    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:38.957245    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:38.957408    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:38.957527    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:38.957642    4640 start.go:159] libmachine.API.Create for "multinode-985000" (driver="hyperkit")
	I0805 16:21:38.957663    4640 client.go:168] LocalClient.Create starting
	I0805 16:21:38.957697    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:21:38.957735    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:21:38.957747    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:21:38.957790    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:21:38.957819    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:21:38.957833    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:21:38.957849    4640 main.go:141] libmachine: Running pre-create checks...
	I0805 16:21:38.957855    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .PreCreateCheck
	I0805 16:21:38.957933    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:38.957959    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:38.967700    4640 main.go:141] libmachine: Creating machine...
	I0805 16:21:38.967725    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .Create
	I0805 16:21:38.967957    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:38.968233    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:38.967940    4677 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:21:38.968338    4640 main.go:141] libmachine: (multinode-985000-m02) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:21:39.171726    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.171650    4677 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa...
	I0805 16:21:39.251408    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.251327    4677 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk...
	I0805 16:21:39.251421    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Writing magic tar header
	I0805 16:21:39.251439    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Writing SSH key tar header
	I0805 16:21:39.252021    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.251983    4677 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02 ...
	I0805 16:21:39.622286    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:39.622309    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid
	I0805 16:21:39.622382    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Using UUID ab5b9c9f-9e28-4bc2-8fcd-b98fce011173
	I0805 16:21:39.647304    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Generated MAC a6:1c:88:9c:44:3
	I0805 16:21:39.647324    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:21:39.647363    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:21:39.647396    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:21:39.647440    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:21:39.647475    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ab5b9c9f-9e28-4bc2-8fcd-b98fce011173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
	I0805 16:21:39.647493    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:21:39.650407    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Pid is 4678
	I0805 16:21:39.650823    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 0
	I0805 16:21:39.650838    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:39.650909    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:39.651807    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:39.651870    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:39.651899    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:39.651984    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:39.652006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:39.652022    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:39.652032    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:39.652039    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:39.652046    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:39.652082    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:39.652100    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:39.652113    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:39.652123    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:39.652143    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:39.657903    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:21:39.666018    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:21:39.666937    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:21:39.666963    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:21:39.666975    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:21:39.666990    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:21:40.050205    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:21:40.050221    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:21:40.165006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:21:40.165028    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:21:40.165042    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:21:40.165049    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:21:40.165899    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:21:40.165911    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:21:41.653048    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 1
	I0805 16:21:41.653066    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:41.653144    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:41.653911    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:41.653968    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:41.653979    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:41.653992    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:41.653998    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:41.654006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:41.654015    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:41.654030    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:41.654045    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:41.654053    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:41.654061    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:41.654070    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:41.654078    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:41.654093    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:43.655366    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 2
	I0805 16:21:43.655382    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:43.655471    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:43.656243    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:43.656291    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:43.656301    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:43.656319    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:43.656329    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:43.656351    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:43.656362    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:43.656369    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:43.656375    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:43.656391    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:43.656406    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:43.656416    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:43.656423    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:43.656437    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:45.657345    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 3
	I0805 16:21:45.657361    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:45.657459    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:45.658214    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:45.658269    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:45.658278    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:45.658286    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:45.658295    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:45.658310    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:45.658321    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:45.658329    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:45.658337    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:45.658349    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:45.658362    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:45.658370    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:45.658378    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:45.658387    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:45.751756    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:21:45.751812    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:21:45.751830    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:21:45.774801    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:21:47.659182    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 4
	I0805 16:21:47.659208    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:47.659291    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:47.660062    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:47.660112    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:47.660128    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:47.660137    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:47.660145    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:47.660153    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:47.660162    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:47.660178    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:47.660192    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:47.660204    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:47.660218    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:47.660230    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:47.660240    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:47.660260    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:49.662115    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 5
	I0805 16:21:49.662148    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:49.662310    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:49.663748    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:49.663812    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0805 16:21:49.663831    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b00c}
	I0805 16:21:49.663846    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found match: a6:1c:88:9c:44:3
	I0805 16:21:49.663856    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | IP: 192.169.0.14
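
Each "Attempt N" above is the hyperkit driver re-scanning /var/db/dhcpd_leases (roughly every two seconds) for the MAC it generated, until the new VM's DHCP lease appears. A standalone sketch of that lookup, assuming the usual one-field-per-line vmnet lease format ("ip_address=..." followed by "hw_address=1,<mac>" within each lease block):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // findLeaseIP scans the macOS vmnet lease file for a hardware address and
    // returns the IP recorded in the same lease block.
    func findLeaseIP(path, mac string) (string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return "", err
    	}
    	defer f.Close()

    	ip := ""
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if v, ok := strings.CutPrefix(line, "ip_address="); ok {
    			ip = v // remember the IP until we see the block's hw_address
    		}
    		if v, ok := strings.CutPrefix(line, "hw_address="); ok {
    			// value looks like "1,a6:1c:88:9c:44:3"
    			if strings.TrimPrefix(v, "1,") == mac {
    				return ip, nil
    			}
    		}
    	}
    	return "", fmt.Errorf("no lease found for %s", mac)
    }

    func main() {
    	ip, err := findLeaseIP("/var/db/dhcpd_leases", "a6:1c:88:9c:44:3")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("IP:", ip)
    }
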
	I0805 16:21:49.663945    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:49.664855    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:49.665006    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:49.665127    4640 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 16:21:49.665139    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:21:49.665271    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:49.665344    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:49.666326    4640 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 16:21:49.666337    4640 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 16:21:49.666342    4640 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 16:21:49.666348    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.666471    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.666603    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.666743    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.666869    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.667045    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.667279    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.667287    4640 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 16:21:49.724369    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
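
WaitForSSH succeeds once a throwaway "exit 0" runs cleanly over SSH, as above. A minimal sketch with golang.org/x/crypto/ssh, using the machine's id_rsa created earlier in this log; host-key checking is disabled, which is only reasonable for a throwaway test VM:

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM
    		Timeout:         5 * time.Second,
    	}
    	// Retry "exit 0" until sshd inside the VM accepts the connection.
    	for {
    		client, err := ssh.Dial("tcp", "192.169.0.14:22", cfg)
    		if err == nil {
    			sess, serr := client.NewSession()
    			if serr == nil {
    				rerr := sess.Run("exit 0")
    				sess.Close()
    				client.Close()
    				if rerr == nil {
    					fmt.Println("SSH is available")
    					return
    				}
    			} else {
    				client.Close()
    			}
    		}
    		time.Sleep(time.Second)
    	}
    }
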
	I0805 16:21:49.724382    4640 main.go:141] libmachine: Detecting the provisioner...
	I0805 16:21:49.724388    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.724522    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.724626    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.724719    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.724810    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.724938    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.725087    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.725094    4640 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 16:21:49.782403    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 16:21:49.782454    4640 main.go:141] libmachine: found compatible host: buildroot
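
Provisioner detection keys on the ID= field of /etc/os-release ("buildroot" in the output above). A small sketch of that parse:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // osReleaseID extracts the ID= field the provisioner detection keys on.
    func osReleaseID(path string) (string, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return "", err
    	}
    	for _, line := range strings.Split(string(data), "\n") {
    		if v, ok := strings.CutPrefix(line, "ID="); ok {
    			return strings.Trim(v, `"`), nil
    		}
    	}
    	return "", fmt.Errorf("no ID= field in %s", path)
    }

    func main() {
    	id, err := osReleaseID("/etc/os-release")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("detected:", id)
    }
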
	I0805 16:21:49.782460    4640 main.go:141] libmachine: Provisioning with buildroot...
	I0805 16:21:49.782466    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.782595    4640 buildroot.go:166] provisioning hostname "multinode-985000-m02"
	I0805 16:21:49.782606    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.782698    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.782797    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.782871    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.782964    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.783079    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.783204    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.783350    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.783359    4640 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000-m02 && echo "multinode-985000-m02" | sudo tee /etc/hostname
	I0805 16:21:49.854175    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000-m02
	
	I0805 16:21:49.854190    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.854319    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.854421    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.854492    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.854587    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.854712    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.854870    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.854882    4640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:21:49.917814    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:21:49.917830    4640 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:21:49.917840    4640 buildroot.go:174] setting up certificates
	I0805 16:21:49.917846    4640 provision.go:84] configureAuth start
	I0805 16:21:49.917856    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.917985    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:49.918095    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.918192    4640 provision.go:143] copyHostCerts
	I0805 16:21:49.918223    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:21:49.918280    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:21:49.918285    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:21:49.918411    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:21:49.918617    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:21:49.918652    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:21:49.918658    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:21:49.918733    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:21:49.918888    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:21:49.918922    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:21:49.918927    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:21:49.918994    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:21:49.919145    4640 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-985000-m02]
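
The server certificate is generated on the host and signed by the minikube CA, with the SANs listed in the provision.go line above baked in. A self-contained crypto/x509 sketch of that shape; the freshly generated CA here is a stand-in for the real ca.pem/ca-key.pem, and the lifetimes are illustrative (26280h matches the CertExpiration in the config dump earlier):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func must(err error) {
    	if err != nil {
    		panic(err)
    	}
    }

    func main() {
    	// Stand-in CA; the real run loads ca.pem/ca-key.pem from .minikube/certs.
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	must(err)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	must(err)
    	caCert, err := x509.ParseCertificate(caDER)
    	must(err)

    	// Server certificate with the SANs from the log line above.
    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	must(err)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-985000-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.14")},
    		DNSNames:     []string{"localhost", "minikube", "multinode-985000-m02"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	must(err)
    	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }
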
	I0805 16:21:50.072896    4640 provision.go:177] copyRemoteCerts
	I0805 16:21:50.072947    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:21:50.072962    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.073107    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.073199    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.073317    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.073426    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:50.108446    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:21:50.108519    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:21:50.128617    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:21:50.128684    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0805 16:21:50.148653    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:21:50.148720    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:21:50.168682    4640 provision.go:87] duration metric: took 250.828344ms to configureAuth
	I0805 16:21:50.168695    4640 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:21:50.168835    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:50.168849    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:50.168993    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.169087    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.169175    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.169262    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.169345    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.169486    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.169621    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.169628    4640 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:21:50.228062    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:21:50.228074    4640 buildroot.go:70] root file system type: tmpfs
	I0805 16:21:50.228150    4640 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:21:50.228164    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.228293    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.228388    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.228480    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.228586    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.228755    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.228888    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.228934    4640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:21:50.296901    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:21:50.296919    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.297064    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.297158    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.297250    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.297333    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.297475    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.297611    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.297624    4640 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:21:51.873922    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
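(The `diff -u old new || { mv ...; systemctl ... }` one-liner above is an idempotent install: diff exits non-zero when the unit differs or — as here — does not exist yet, which is exactly when the new file should be moved into place and the service reloaded. The same decision as a Go sketch, illustrative only; paths are from the log, the helper name is hypothetical:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// needsInstall reports whether candidate should replace current, mirroring
// the exit-status semantics of `diff -u current candidate`.
func needsInstall(current, candidate string) (bool, error) {
	cur, err := os.ReadFile(current)
	if os.IsNotExist(err) {
		return true, nil // no unit installed yet, the case seen in this log
	}
	if err != nil {
		return false, err
	}
	cand, err := os.ReadFile(candidate)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(cur, cand), nil
}

func main() {
	ok, err := needsInstall("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new")
	fmt.Println("install needed:", ok, "err:", err)
}
)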
	I0805 16:21:51.873940    4640 main.go:141] libmachine: Checking connection to Docker...
	I0805 16:21:51.873964    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetURL
	I0805 16:21:51.874107    4640 main.go:141] libmachine: Docker is up and running!
	I0805 16:21:51.874115    4640 main.go:141] libmachine: Reticulating splines...
	I0805 16:21:51.874120    4640 client.go:171] duration metric: took 12.916447572s to LocalClient.Create
	I0805 16:21:51.874129    4640 start.go:167] duration metric: took 12.916485141s to libmachine.API.Create "multinode-985000"
	I0805 16:21:51.874135    4640 start.go:293] postStartSetup for "multinode-985000-m02" (driver="hyperkit")
	I0805 16:21:51.874142    4640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:21:51.874152    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:51.874292    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:21:51.874313    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:51.874416    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:51.874505    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.874583    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:51.874657    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:51.915394    4640 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:21:51.919538    4640 command_runner.go:130] > NAME=Buildroot
	I0805 16:21:51.919549    4640 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:21:51.919553    4640 command_runner.go:130] > ID=buildroot
	I0805 16:21:51.919557    4640 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:21:51.919560    4640 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:21:51.919635    4640 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:21:51.919645    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:21:51.919746    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:21:51.919897    4640 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:21:51.919903    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:21:51.920070    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:21:51.929531    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:21:51.959146    4640 start.go:296] duration metric: took 85.003807ms for postStartSetup
	I0805 16:21:51.959174    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:51.959830    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:51.959996    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:51.960355    4640 start.go:128] duration metric: took 13.03589336s to createHost
	I0805 16:21:51.960370    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:51.960461    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:51.960532    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.960607    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.960679    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:51.960792    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:51.960921    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:51.960928    4640 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:21:52.018527    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722900112.019707412
	
	I0805 16:21:52.018539    4640 fix.go:216] guest clock: 1722900112.019707412
	I0805 16:21:52.018544    4640 fix.go:229] Guest: 2024-08-05 16:21:52.019707412 -0700 PDT Remote: 2024-08-05 16:21:51.960363 -0700 PDT m=+79.692294773 (delta=59.344412ms)
	I0805 16:21:52.018555    4640 fix.go:200] guest clock delta is within tolerance: 59.344412ms
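(The tolerance check compares the guest's `date +%s.%N` output against the host clock at send time. A self-contained reconstruction using the two timestamps from this log — the 2s bound is an assumption for the sketch, not a value taken from minikube:

package main

import (
	"fmt"
	"math"
	"time"
)

func main() {
	guest := time.Unix(1722900112, 19707412) // "1722900112.019707412" reported by the guest
	host := time.Date(2024, 8, 5, 16, 21, 51, 960363000,
		time.FixedZone("PDT", -7*60*60)) // host-side timestamp from the same line
	delta := guest.Sub(host)
	fmt.Printf("delta=%v within=%v\n", delta, math.Abs(delta.Seconds()) < 2)
	// prints delta=59.344412ms within=true, matching the log
}
)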
	I0805 16:21:52.018561    4640 start.go:83] releasing machines lock for "multinode-985000-m02", held for 13.094193048s
	I0805 16:21:52.018577    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.018703    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:52.040117    4640 out.go:177] * Found network options:
	I0805 16:21:52.084887    4640 out.go:177]   - NO_PROXY=192.169.0.13
	W0805 16:21:52.106885    4640 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:21:52.106945    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.107811    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.108153    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.108320    4640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:21:52.108371    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	W0805 16:21:52.108412    4640 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:21:52.108519    4640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:21:52.108545    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:52.108628    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:52.108772    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:52.108842    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:52.108951    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:52.109026    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:52.109176    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:52.109197    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:52.109323    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:52.141829    4640 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:21:52.141939    4640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:21:52.141993    4640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:21:52.191903    4640 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:21:52.192466    4640 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:21:52.192507    4640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
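("Disabling" here just means renaming: any bridge or podman conflist directly under /etc/cni/net.d gains a .mk_disabled suffix so the runtime stops loading it. Equivalent Go for the find/-exec mv pipeline above — a sketch; unlike find it does not restrict itself to regular files:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	paths, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		panic(err)
	}
	for _, p := range paths {
		name := filepath.Base(p)
		if strings.HasSuffix(name, ".mk_disabled") {
			continue // already parked
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			fmt.Printf("%s, ", p) // same trace format as find's -printf "%p, "
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				panic(err)
			}
		}
	}
}
)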
	I0805 16:21:52.192514    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:21:52.192581    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:21:52.208225    4640 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
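(/etc/crictl.yaml is a one-key config that points CRI tooling at a runtime socket — containerd first here, rewritten to cri-dockerd's socket further down in this log. A sketch of the write, path from the log, root required:

package main

import "os"

func main() {
	conf := "runtime-endpoint: unix:///run/containerd/containerd.sock\n"
	if err := os.WriteFile("/etc/crictl.yaml", []byte(conf), 0644); err != nil {
		panic(err)
	}
}
)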
	I0805 16:21:52.208528    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:21:52.217078    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:21:52.225489    4640 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:21:52.225534    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:21:52.233992    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:21:52.242465    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:21:52.250835    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:21:52.260065    4640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:21:52.268863    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:21:52.277242    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:21:52.285501    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:21:52.293845    4640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:21:52.301185    4640 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:21:52.301319    4640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:21:52.308881    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:52.403323    4640 ssh_runner.go:195] Run: sudo systemctl restart containerd
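(The sed calls above all rewrite /etc/containerd/config.toml in place; the key one for the cgroup driver flips every `SystemdCgroup = ...` line to false so containerd uses cgroupfs. The same edit as a Go sketch, with simplified error handling and the path taken from the log:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Matches the sed expression: preserve indentation, force the value to false.
	re := regexp.MustCompile(`(?m)^([ \t]*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
}
)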
	I0805 16:21:52.423722    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:21:52.423794    4640 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:21:52.442557    4640 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:21:52.443108    4640 command_runner.go:130] > [Unit]
	I0805 16:21:52.443119    4640 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:21:52.443124    4640 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:21:52.443128    4640 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:21:52.443132    4640 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:21:52.443136    4640 command_runner.go:130] > StartLimitBurst=3
	I0805 16:21:52.443141    4640 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:21:52.443147    4640 command_runner.go:130] > [Service]
	I0805 16:21:52.443151    4640 command_runner.go:130] > Type=notify
	I0805 16:21:52.443155    4640 command_runner.go:130] > Restart=on-failure
	I0805 16:21:52.443160    4640 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0805 16:21:52.443165    4640 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:21:52.443175    4640 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:21:52.443182    4640 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:21:52.443188    4640 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:21:52.443194    4640 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:21:52.443200    4640 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:21:52.443212    4640 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:21:52.443224    4640 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:21:52.443231    4640 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:21:52.443234    4640 command_runner.go:130] > ExecStart=
	I0805 16:21:52.443246    4640 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:21:52.443250    4640 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:21:52.443256    4640 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:21:52.443262    4640 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:21:52.443265    4640 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:21:52.443269    4640 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:21:52.443272    4640 command_runner.go:130] > LimitCORE=infinity
	I0805 16:21:52.443277    4640 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:21:52.443282    4640 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:21:52.443285    4640 command_runner.go:130] > TasksMax=infinity
	I0805 16:21:52.443290    4640 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:21:52.443296    4640 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:21:52.443299    4640 command_runner.go:130] > Delegate=yes
	I0805 16:21:52.443304    4640 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:21:52.443313    4640 command_runner.go:130] > KillMode=process
	I0805 16:21:52.443317    4640 command_runner.go:130] > [Install]
	I0805 16:21:52.443321    4640 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:21:52.443454    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:21:52.455112    4640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:21:52.472976    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:21:52.485648    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:21:52.496640    4640 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:21:52.520742    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:21:52.532843    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:21:52.547391    4640 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:21:52.547619    4640 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:21:52.550475    4640 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:21:52.550551    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:21:52.558821    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
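(The 189-byte 10-cni.conf pushed here is not printed in the log, so its contents stay unknown; what the step relies on is the systemd drop-in mechanism: every *.conf under cri-docker.service.d/ is merged into cri-docker.service at the next daemon-reload. A sketch of just that mechanism, with a hypothetical placeholder body since the real file is elided:

package main

import (
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/systemd/system/cri-docker.service.d"
	if err := os.MkdirAll(dir, 0755); err != nil {
		panic(err)
	}
	body := "[Service]\n# (directives from the real 10-cni.conf would go here)\n"
	if err := os.WriteFile(filepath.Join(dir, "10-cni.conf"), []byte(body), 0644); err != nil {
		panic(err)
	}
}
)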
	I0805 16:21:52.572801    4640 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:21:52.669948    4640 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:21:52.772017    4640 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:21:52.772038    4640 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
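(The 130-byte daemon.json is likewise not shown. Given the "configuring docker to use cgroupfs" message, a plausible minimal file would select the driver via exec-opts; this is an assumption about the content, not a dump of minikube's template:

package main

import (
	"encoding/json"
	"os"
)

func main() {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"}, // assumed content
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/docker/daemon.json", data, 0644); err != nil {
		panic(err)
	}
}
)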
	I0805 16:21:52.785587    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:52.887001    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:22:53.782764    4640 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0805 16:22:53.782779    4640 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0805 16:22:53.782788    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.895755367s)
	I0805 16:22:53.782849    4640 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0805 16:22:53.791796    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:22:53.791808    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578059613Z" level=info msg="Starting up"
	I0805 16:22:53.791820    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578746899Z" level=info msg="containerd not running, starting managed containerd"
	I0805 16:22:53.791833    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.579364099Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	I0805 16:22:53.791843    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.597194743Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0805 16:22:53.791853    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613422882Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0805 16:22:53.791865    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613448264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0805 16:22:53.791875    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613527396Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0805 16:22:53.791884    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613540484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791897    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613598776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791906    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613664323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791924    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613844698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791936    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613881896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791948    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613894727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791957    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613902000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791967    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614005875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791976    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614259691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791991    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615867073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.792000    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615974584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.792024    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616138996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.792033    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616172823Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0805 16:22:53.792042    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616291383Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0805 16:22:53.792050    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616398312Z" level=info msg="metadata content store policy set" policy=shared
	I0805 16:22:53.792059    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.618998610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0805 16:22:53.792068    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619065338Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0805 16:22:53.792076    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619081703Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0805 16:22:53.792085    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619092273Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0805 16:22:53.792094    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619101426Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0805 16:22:53.792103    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619164798Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0805 16:22:53.792113    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619370752Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0805 16:22:53.792121    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619460644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0805 16:22:53.792129    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619495461Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0805 16:22:53.792138    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619506581Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0805 16:22:53.792148    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619515758Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792158    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619524383Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792170    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619532546Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792178    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619541391Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792187    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619550990Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792197    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619565508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792266    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619576616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792278    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619584035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792291    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619598072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792299    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619608190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792307    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619616319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792316    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619625389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792326    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619634123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792335    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619648148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792344    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619658942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792353    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619667668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792362    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619676302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792371    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619686416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792380    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619694011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792388    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619701566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792397    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619709342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792406    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619719250Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0805 16:22:53.792415    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619733203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792423    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619741785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792432    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619749153Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0805 16:22:53.792442    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619797467Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0805 16:22:53.792454    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619811479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0805 16:22:53.792467    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619819137Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0805 16:22:53.792661    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619826861Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0805 16:22:53.792673    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619833500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792682    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619841896Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0805 16:22:53.792690    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619852419Z" level=info msg="NRI interface is disabled by configuration."
	I0805 16:22:53.792702    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620071162Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0805 16:22:53.792710    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620124755Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0805 16:22:53.792718    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620155079Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0805 16:22:53.792725    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620168148Z" level=info msg="containerd successfully booted in 0.023750s"
	I0805 16:22:53.792734    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.639692405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0805 16:22:53.792741    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.644102102Z" level=info msg="Loading containers: start."
	I0805 16:22:53.792763    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.740540264Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0805 16:22:53.792774    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.826229634Z" level=info msg="Loading containers: done."
	I0805 16:22:53.792783    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843276878Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0805 16:22:53.792792    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843375843Z" level=info msg="Daemon has completed initialization"
	I0805 16:22:53.792800    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869275976Z" level=info msg="API listen on /var/run/docker.sock"
	I0805 16:22:53.792807    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869434474Z" level=info msg="API listen on [::]:2376"
	I0805 16:22:53.792813    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	I0805 16:22:53.792821    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.919662359Z" level=info msg="Processing signal 'terminated'"
	I0805 16:22:53.792829    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920773928Z" level=info msg="Daemon shutdown complete"
	I0805 16:22:53.792840    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920792538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0805 16:22:53.792852    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920845272Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0805 16:22:53.792861    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920858866Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0805 16:22:53.792868    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0805 16:22:53.792874    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0805 16:22:53.792904    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0805 16:22:53.792911    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:22:53.792918    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 dockerd[923]: time="2024-08-05T23:21:53.957339969Z" level=info msg="Starting up"
	I0805 16:22:53.792929    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 dockerd[923]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0805 16:22:53.792940    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0805 16:22:53.792946    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0805 16:22:53.792952    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
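(The journal pins the failure down: the restarted dockerd (pid 923) waits for a containerd socket that never becomes dialable and gives up with "context deadline exceeded", so systemd fails the unit and minikube surfaces it as RUNTIME_ENABLE below. Mechanically, that error is a unix-socket dial outrunning its context, as in this sketch — the 5s timeout is arbitrary, and the dial instead fails fast if the socket file is missing:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	var d net.Dialer
	conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
	if err != nil {
		fmt.Println("dial failed:", err) // e.g. "context deadline exceeded"
		return
	}
	conn.Close()
}
)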
	I0805 16:22:53.817223    4640 out.go:177] 
	W0805 16:22:53.838182    4640 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 05 23:21:50 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578059613Z" level=info msg="Starting up"
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578746899Z" level=info msg="containerd not running, starting managed containerd"
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.579364099Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.597194743Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613422882Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613448264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613527396Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613540484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613598776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613664323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613844698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613881896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613894727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613902000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614005875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614259691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615867073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615974584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616138996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616172823Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616291383Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616398312Z" level=info msg="metadata content store policy set" policy=shared
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.618998610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619065338Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619081703Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619092273Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619101426Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619164798Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619370752Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619460644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619495461Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619506581Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619515758Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619524383Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619532546Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619541391Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619550990Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619565508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619576616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619584035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619598072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619608190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619616319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619625389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619634123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619648148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619658942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619667668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619676302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619686416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619694011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619701566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619709342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619719250Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619733203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619741785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619749153Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619797467Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619811479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619819137Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619826861Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619833500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619841896Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619852419Z" level=info msg="NRI interface is disabled by configuration."
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620071162Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620124755Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620155079Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620168148Z" level=info msg="containerd successfully booted in 0.023750s"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.639692405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.644102102Z" level=info msg="Loading containers: start."
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.740540264Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.826229634Z" level=info msg="Loading containers: done."
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843276878Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843375843Z" level=info msg="Daemon has completed initialization"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869275976Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869434474Z" level=info msg="API listen on [::]:2376"
	Aug 05 23:21:51 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.919662359Z" level=info msg="Processing signal 'terminated'"
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920773928Z" level=info msg="Daemon shutdown complete"
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920792538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920845272Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920858866Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 05 23:21:52 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:21:53 multinode-985000-m02 dockerd[923]: time="2024-08-05T23:21:53.957339969Z" level=info msg="Starting up"
	Aug 05 23:22:53 multinode-985000-m02 dockerd[923]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0805 16:22:53.838301    4640 out.go:239] * 
	W0805 16:22:53.839537    4640 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:22:53.901092    4640 out.go:177] 
	
	
	==> Docker <==
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.538240622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.545949341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546006859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546094356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546213245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:21:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2a8cd74365e92f179bb6ee1ce28c9364c192d2bf64c54e8b18c5339cfbdf5dcd/resolv.conf as [nameserver 192.169.0.1]"
	Aug 05 23:21:36 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:21:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/35b9ac42edc06af57c697463456d60a00f8d9d12849ef967af1e639bc238e3b3/resolv.conf as [nameserver 192.169.0.1]"
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.715025205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.715620680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.716022138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.717088853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755323726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755409641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755418837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.764703174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.493861515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.493963422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.494329548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.494770138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:57 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:22:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/abfb33d4f204dd0b2a7ffc533336cce5539144674b64125ac7373b0be8961559/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 05 23:22:58 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:22:58Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841390849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841491056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841532145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841640743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0cbc162071e51       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   abfb33d4f204d       busybox-fc5497c4f-44k5g
	c9365aec33892       cbb01a7bd410d                                                                                         13 minutes ago      Running             coredns                   0                   35b9ac42edc06       coredns-7db6d8ff4d-fqtll
	3d9fd612d0b14       6e38f40d628db                                                                                         13 minutes ago      Running             storage-provisioner       0                   2a8cd74365e92       storage-provisioner
	724e5cfab0a27       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              13 minutes ago      Running             kindnet-cni               0                   65a1122097f07       kindnet-tvtvg
	d58ca48f9f8b2       55bb025d2cfa5                                                                                         13 minutes ago      Running             kube-proxy                0                   c91338eb0e138       kube-proxy-fwgw7
	792feba1a6f6b       3edc18e7b7672                                                                                         14 minutes ago      Running             kube-scheduler            0                   c86e04eb7823b       kube-scheduler-multinode-985000
	1fdd85b796ab3       3861cfcd7c04c                                                                                         14 minutes ago      Running             etcd                      0                   b58900db52990       etcd-multinode-985000
	d11865076c645       76932a3b37d7e                                                                                         14 minutes ago      Running             kube-controller-manager   0                   55a20063845e3       kube-controller-manager-multinode-985000
	608878b33f358       1f6d574d502f3                                                                                         14 minutes ago      Running             kube-apiserver            0                   569788c2699f1       kube-apiserver-multinode-985000
	
	
	==> coredns [c9365aec3389] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57821 - 19682 "HINFO IN 7732396596932693360.4385804994640298901. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014623104s
	[INFO] 10.244.0.3:44234 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136193s
	[INFO] 10.244.0.3:37423 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.058799401s
	[INFO] 10.244.0.3:57961 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.010090318s
	[INFO] 10.244.0.3:37799 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.012765436s
	[INFO] 10.244.0.3:46499 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078364s
	[INFO] 10.244.0.3:42436 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011216992s
	[INFO] 10.244.0.3:35880 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000144767s
	[INFO] 10.244.0.3:39224 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104006s
	[INFO] 10.244.0.3:48536 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013324615s
	[INFO] 10.244.0.3:55841 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000221823s
	[INFO] 10.244.0.3:46712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111417s
	[INFO] 10.244.0.3:51982 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099744s
	[INFO] 10.244.0.3:55425 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080184s
	[INFO] 10.244.0.3:58084 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119904s
	[INFO] 10.244.0.3:57892 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049065s
	[INFO] 10.244.0.3:52329 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049128s
	[INFO] 10.244.0.3:60384 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083319s
	[INFO] 10.244.0.3:51923 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000058598s
	[INFO] 10.244.0.3:37985 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007256s
	[INFO] 10.244.0.3:45792 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000071025s
	
	
	==> describe nodes <==
	Name:               multinode-985000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-985000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=multinode-985000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T16_21_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:21:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-985000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:35:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-985000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 43d0d80c8ac846e58ac4351481e2a76f
	  System UUID:                3ac6443b-0000-0000-898d-9b152fa64288
	  Boot ID:                    382df761-aca3-4a9d-bdce-655bf0444398
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-44k5g                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-fqtll                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-985000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-tvtvg                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-985000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-985000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-fwgw7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-985000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node multinode-985000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node multinode-985000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node multinode-985000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node multinode-985000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node multinode-985000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node multinode-985000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node multinode-985000 event: Registered Node multinode-985000 in Controller
	  Normal  NodeReady                13m                kubelet          Node multinode-985000 status is now: NodeReady
	
	
	Name:               multinode-985000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-985000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=multinode-985000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T16_34_49_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:34:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-985000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:35:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:35:11 +0000   Mon, 05 Aug 2024 23:34:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:35:11 +0000   Mon, 05 Aug 2024 23:34:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:35:11 +0000   Mon, 05 Aug 2024 23:34:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:35:11 +0000   Mon, 05 Aug 2024 23:35:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.15
	  Hostname:    multinode-985000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 826016b56497466499a1ccf530c0b20a
	  System UUID:                f79c425f-0000-0000-b959-1b18fd31916b
	  Boot ID:                    e2b098c4-c586-45f3-bd88-3d2d31770824
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ptd5b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-5kfjr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      29s
	  kube-system                 kube-proxy-s65dd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  NodeHasSufficientMemory  30s (x2 over 30s)  kubelet          Node multinode-985000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s (x2 over 30s)  kubelet          Node multinode-985000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s (x2 over 30s)  kubelet          Node multinode-985000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  30s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           29s                node-controller  Node multinode-985000-m03 event: Registered Node multinode-985000-m03 in Controller
	  Normal  NodeReady                7s                 kubelet          Node multinode-985000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.261909] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.788416] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
	[  +0.099076] systemd-fstab-generator[502]: Ignoring "noauto" option for root device
	[  +1.730104] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +0.293514] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.050985] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.056812] systemd-fstab-generator[892]: Ignoring "noauto" option for root device
	[  +0.126132] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[  +2.458612] systemd-fstab-generator[1120]: Ignoring "noauto" option for root device
	[  +0.104830] systemd-fstab-generator[1132]: Ignoring "noauto" option for root device
	[  +0.110549] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +0.128910] systemd-fstab-generator[1159]: Ignoring "noauto" option for root device
	[  +3.841948] systemd-fstab-generator[1259]: Ignoring "noauto" option for root device
	[  +0.049995] kauditd_printk_skb: 180 callbacks suppressed
	[  +2.575866] systemd-fstab-generator[1508]: Ignoring "noauto" option for root device
	[  +3.513702] systemd-fstab-generator[1689]: Ignoring "noauto" option for root device
	[  +0.052965] kauditd_printk_skb: 70 callbacks suppressed
	[Aug 5 23:21] systemd-fstab-generator[2095]: Ignoring "noauto" option for root device
	[  +0.093506] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.997559] systemd-fstab-generator[2287]: Ignoring "noauto" option for root device
	[  +0.103967] kauditd_printk_skb: 12 callbacks suppressed
	[ +16.210215] kauditd_printk_skb: 60 callbacks suppressed
	[Aug 5 23:22] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [1fdd85b796ab] <==
	{"level":"info","ts":"2024-08-05T23:21:02.190598Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:21:02.190621Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:21:02.179152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 switched to configuration voters=(16152458731666035825)"}
	{"level":"info","ts":"2024-08-05T23:21:02.190761Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2024-08-05T23:21:02.845352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.84543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.845462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.845512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.849595Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.851787Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-985000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T23:21:02.852037Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:21:02.855611Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-08-05T23:21:02.856003Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.856059Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.85615Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.863221Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:21:02.86336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T23:21:02.863406Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T23:21:02.864495Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T23:31:02.914901Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":684}
	{"level":"info","ts":"2024-08-05T23:31:02.918154Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":684,"took":"2.558785ms","hash":2682644219,"current-db-size-bytes":2088960,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2088960,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-08-05T23:31:02.918199Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2682644219,"revision":684,"compact-revision":-1}
	
	
	==> kernel <==
	 23:35:18 up 14 min,  0 users,  load average: 0.45, 0.18, 0.11
	Linux multinode-985000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [724e5cfab0a2] <==
	I0805 23:33:54.988562       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:33:54.988724       1 main.go:299] handling current node
	I0805 23:34:04.990678       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:04.991047       1 main.go:299] handling current node
	I0805 23:34:14.989462       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:14.989592       1 main.go:299] handling current node
	I0805 23:34:24.989135       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:24.989269       1 main.go:299] handling current node
	I0805 23:34:34.997631       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:34.997789       1 main.go:299] handling current node
	I0805 23:34:44.997368       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:44.997416       1 main.go:299] handling current node
	I0805 23:34:54.992568       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:54.992629       1 main.go:299] handling current node
	I0805 23:34:54.992643       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:34:54.992648       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.1.0/24] 
	I0805 23:34:54.992876       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.169.0.15 Flags: [] Table: 0} 
	I0805 23:35:04.990312       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:35:04.990398       1 main.go:299] handling current node
	I0805 23:35:04.990506       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:35:04.990544       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.1.0/24] 
	I0805 23:35:14.988650       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:35:14.988669       1 main.go:299] handling current node
	I0805 23:35:14.988679       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:35:14.988682       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [608878b33f35] <==
	I0805 23:21:04.097032       1 aggregator.go:165] initial CRD sync complete...
	I0805 23:21:04.097038       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 23:21:04.097041       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 23:21:04.097046       1 cache.go:39] Caches are synced for autoregister controller
	I0805 23:21:04.110976       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 23:21:04.964782       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0805 23:21:04.969492       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0805 23:21:04.969592       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0805 23:21:05.293407       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 23:21:05.318630       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 23:21:05.372930       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0805 23:21:05.377089       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I0805 23:21:05.377814       1 controller.go:615] quota admission added evaluator for: endpoints
	I0805 23:21:05.381896       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 23:21:06.014220       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0805 23:21:06.529594       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 23:21:06.534785       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0805 23:21:06.541889       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 23:21:20.069451       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0805 23:21:20.168118       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0805 23:34:22.712021       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52583: use of closed network connection
	E0805 23:34:23.040370       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52588: use of closed network connection
	E0805 23:34:23.352264       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52593: use of closed network connection
	E0805 23:34:26.444399       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52624: use of closed network connection
	E0805 23:34:26.631411       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52626: use of closed network connection
	
	
	==> kube-controller-manager [d11865076c64] <==
	I0805 23:21:20.453666       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.448745ms"
	I0805 23:21:20.454853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="1.144243ms"
	I0805 23:21:20.787054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.481389ms"
	I0805 23:21:20.817469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.368774ms"
	I0805 23:21:20.817550       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.975µs"
	I0805 23:21:35.878200       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.077µs"
	I0805 23:21:35.888778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.967µs"
	I0805 23:21:37.680305       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.353µs"
	I0805 23:21:37.699191       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="7.51419ms"
	I0805 23:21:37.699276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.856µs"
	I0805 23:21:39.419986       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0805 23:22:57.139604       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.652844ms"
	I0805 23:22:57.152479       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.645403ms"
	I0805 23:22:57.161837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.312944ms"
	I0805 23:22:57.161913       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.986µs"
	I0805 23:22:59.131878       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.268042ms"
	I0805 23:22:59.132399       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.529µs"
	I0805 23:34:49.118620       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-985000-m03\" does not exist"
	I0805 23:34:49.123685       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-985000-m03" podCIDRs=["10.244.1.0/24"]
	I0805 23:34:49.553799       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-985000-m03"
	I0805 23:35:12.244278       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-985000-m03"
	I0805 23:35:12.252224       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.969µs"
	I0805 23:35:12.259725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.754µs"
	I0805 23:35:14.267796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.716009ms"
	I0805 23:35:14.267862       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.069µs"
	
	
	==> kube-proxy [d58ca48f9f8b] <==
	I0805 23:21:21.029929       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:21:21.072929       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0805 23:21:21.105532       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:21:21.105552       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:21:21.105563       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:21:21.107493       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:21:21.107594       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:21:21.107602       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:21:21.108477       1 config.go:192] "Starting service config controller"
	I0805 23:21:21.108482       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:21:21.108492       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:21:21.108494       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:21:21.108784       1 config.go:319] "Starting node config controller"
	I0805 23:21:21.108789       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:21:21.209420       1 shared_informer.go:320] Caches are synced for node config
	I0805 23:21:21.209474       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:21:21.209501       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [792feba1a6f6] <==
	E0805 23:21:04.024310       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:04.024229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 23:21:04.024017       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:21:04.024329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:21:04.024047       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:21:04.024362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:21:04.024118       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:21:04.024431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0805 23:21:04.860871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:04.861069       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:04.959895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 23:21:04.959949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 23:21:04.962444       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:21:04.962496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:21:04.968410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:21:04.968452       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:21:05.030527       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 23:21:05.030566       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 23:21:05.076451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:05.076659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:05.118159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:05.118676       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:05.141945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:21:05.142020       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0805 23:21:08.218627       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 23:31:06 multinode-985000 kubelet[2102]: E0805 23:31:06.388949    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:31:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:31:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:31:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:31:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:32:06 multinode-985000 kubelet[2102]: E0805 23:32:06.388091    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:33:06 multinode-985000 kubelet[2102]: E0805 23:33:06.388876    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:34:06 multinode-985000 kubelet[2102]: E0805 23:34:06.388016    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:35:06 multinode-985000 kubelet[2102]: E0805 23:35:06.389737    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:35:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:35:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:35:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:35:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-985000 -n multinode-985000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-985000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/CopyFile (2.76s)
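A note on the post-mortem logs above: the kube-scheduler "forbidden" list/watch errors are startup-ordering noise that stops once its informer caches sync (the final scheduler line shown), and the once-per-minute kubelet errors come from its iptables "canary", a sentinel chain it creates and deletes to detect rule flushes. On this guest image the ip6tables `nat' table cannot be initialized, so only the IPv6 probe fails. Below is a minimal Go sketch of such a probe, assuming it amounts to chain creation plus cleanup; this is illustrative, not kubelet source.

package main

import (
	"fmt"
	"os/exec"
)

// probeCanary creates a sentinel chain in the given table and removes
// it again, approximating the kubelet canary seen in the log. On this
// guest, ip6tables exits with status 3 ("can't initialize ip6tables
// table `nat'"), which is exactly the error quoted above.
func probeCanary(iptablesCmd, table, chain string) error {
	if out, err := exec.Command(iptablesCmd, "-w", "-t", table, "-N", chain).CombinedOutput(); err != nil {
		return fmt.Errorf("creating chain %q in table %q: %v: %s", chain, table, err, out)
	}
	// Clean up the sentinel so repeated probes keep succeeding.
	return exec.Command(iptablesCmd, "-w", "-t", table, "-X", chain).Run()
}

func main() {
	if err := probeCanary("ip6tables", "nat", "KUBE-KUBELET-CANARY"); err != nil {
		fmt.Println("canary failed:", err) // expected where ip6table_nat is unavailable
	}
}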

TestMultiNode/serial/StopNode (11.42s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-985000 node stop m03: (8.33225252s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status: exit status 7 (249.637886ms)

-- stdout --
	multinode-985000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-985000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-985000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
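The composite exit status 7 (rather than a plain 1) reflects how minikube's status command appears to build its exit code: one bit per problem class, OR-ed across all nodes, so a single fully stopped node can produce 7 on its own. The Go sketch below illustrates the idea; the flag names, values, and exact predicates are assumptions for illustration, not quotations from minikube's source.

package main

import "fmt"

// Illustrative bit flags; names and values are assumptions for this
// sketch, not constants quoted from minikube.
const (
	hostNotRunning    = 1 << 0 // a node's VM is not running
	clusterNotRunning = 1 << 1 // a kubelet (or reachable apiserver) is down
	k8sNotConfigured  = 1 << 2 // a node's kubeconfig entry is not usable
)

type status struct{ Host, Kubelet, APIServer, Kubeconfig string }

// exitCode ORs one bit per problem class across every node, so a single
// fully stopped node is enough to produce 7.
func exitCode(statuses []status) int {
	c := 0
	for _, st := range statuses {
		if st.Host != "Running" {
			c |= hostNotRunning
		}
		if st.Kubelet != "Running" || (st.APIServer != "Running" && st.APIServer != "Irrelevant") {
			c |= clusterNotRunning
		}
		if st.Kubeconfig != "Configured" && st.Kubeconfig != "Irrelevant" {
			c |= k8sNotConfigured
		}
	}
	return c
}

func main() {
	fmt.Println(exitCode([]status{
		{"Running", "Running", "Running", "Configured"},    // multinode-985000
		{"Running", "Stopped", "Irrelevant", "Irrelevant"}, // multinode-985000-m02
		{"Stopped", "Stopped", "Stopped", "Stopped"},       // multinode-985000-m03
	})) // prints 7, matching the exit status above
}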
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr: exit status 7 (246.240451ms)

-- stdout --
	multinode-985000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-985000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-985000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0805 16:35:28.372993    5352 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:35:28.373269    5352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:35:28.373276    5352 out.go:304] Setting ErrFile to fd 2...
	I0805 16:35:28.373300    5352 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:35:28.373478    5352 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:35:28.373660    5352 out.go:298] Setting JSON to false
	I0805 16:35:28.373683    5352 mustload.go:65] Loading cluster: multinode-985000
	I0805 16:35:28.373724    5352 notify.go:220] Checking for updates...
	I0805 16:35:28.373982    5352 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:35:28.373998    5352 status.go:255] checking status of multinode-985000 ...
	I0805 16:35:28.374361    5352 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:28.374417    5352 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:28.383058    5352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52816
	I0805 16:35:28.383416    5352 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:28.383903    5352 main.go:141] libmachine: Using API Version  1
	I0805 16:35:28.383919    5352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:28.384127    5352 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:28.384230    5352 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:35:28.384328    5352 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:35:28.384391    5352 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:35:28.385332    5352 status.go:330] multinode-985000 host status = "Running" (err=<nil>)
	I0805 16:35:28.385353    5352 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:35:28.385592    5352 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:28.385615    5352 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:28.393831    5352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52818
	I0805 16:35:28.394155    5352 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:28.394492    5352 main.go:141] libmachine: Using API Version  1
	I0805 16:35:28.394507    5352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:28.394710    5352 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:28.394814    5352 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:35:28.394917    5352 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:35:28.395147    5352 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:28.395167    5352 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:28.404059    5352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52820
	I0805 16:35:28.404417    5352 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:28.404730    5352 main.go:141] libmachine: Using API Version  1
	I0805 16:35:28.404740    5352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:28.404947    5352 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:28.405041    5352 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:35:28.405169    5352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:35:28.405194    5352 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:35:28.405288    5352 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:35:28.405401    5352 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:35:28.405485    5352 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:35:28.405566    5352 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:35:28.438159    5352 ssh_runner.go:195] Run: systemctl --version
	I0805 16:35:28.442479    5352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:35:28.453267    5352 kubeconfig.go:125] found "multinode-985000" server: "https://192.169.0.13:8443"
	I0805 16:35:28.453293    5352 api_server.go:166] Checking apiserver status ...
	I0805 16:35:28.453331    5352 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:35:28.463804    5352 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup
	W0805 16:35:28.471242    5352 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:35:28.471284    5352 ssh_runner.go:195] Run: ls
	I0805 16:35:28.474541    5352 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:35:28.477792    5352 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0805 16:35:28.477802    5352 status.go:422] multinode-985000 apiserver status = Running (err=<nil>)
	I0805 16:35:28.477811    5352 status.go:257] multinode-985000 status: &{Name:multinode-985000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:35:28.477822    5352 status.go:255] checking status of multinode-985000-m02 ...
	I0805 16:35:28.478099    5352 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:28.478119    5352 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:28.486811    5352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52824
	I0805 16:35:28.487169    5352 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:28.487502    5352 main.go:141] libmachine: Using API Version  1
	I0805 16:35:28.487513    5352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:28.487737    5352 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:28.487846    5352 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:35:28.487942    5352 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:35:28.488025    5352 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:35:28.489012    5352 status.go:330] multinode-985000-m02 host status = "Running" (err=<nil>)
	I0805 16:35:28.489022    5352 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:35:28.489274    5352 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:28.489298    5352 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:28.497945    5352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52826
	I0805 16:35:28.498289    5352 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:28.498637    5352 main.go:141] libmachine: Using API Version  1
	I0805 16:35:28.498658    5352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:28.498859    5352 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:28.498974    5352 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:35:28.499063    5352 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:35:28.499314    5352 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:28.499340    5352 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:28.507673    5352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52828
	I0805 16:35:28.508000    5352 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:28.508347    5352 main.go:141] libmachine: Using API Version  1
	I0805 16:35:28.508363    5352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:28.508547    5352 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:28.508647    5352 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:35:28.508762    5352 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:35:28.508774    5352 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:35:28.508857    5352 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:35:28.508932    5352 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:35:28.509019    5352 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:35:28.509106    5352 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:35:28.543236    5352 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:35:28.554355    5352 status.go:257] multinode-985000-m02 status: &{Name:multinode-985000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:35:28.554370    5352 status.go:255] checking status of multinode-985000-m03 ...
	I0805 16:35:28.554636    5352 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:35:28.554656    5352 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:35:28.563156    5352 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52831
	I0805 16:35:28.563496    5352 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:35:28.563800    5352 main.go:141] libmachine: Using API Version  1
	I0805 16:35:28.563810    5352 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:35:28.564004    5352 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:35:28.564123    5352 main.go:141] libmachine: (multinode-985000-m03) Calling .GetState
	I0805 16:35:28.564205    5352 main.go:141] libmachine: (multinode-985000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:35:28.564274    5352 main.go:141] libmachine: (multinode-985000-m03) DBG | hyperkit pid from json: 5266
	I0805 16:35:28.565246    5352 main.go:141] libmachine: (multinode-985000-m03) DBG | hyperkit pid 5266 missing from process table
	I0805 16:35:28.565270    5352 status.go:330] multinode-985000-m03 host status = "Stopped" (err=<nil>)
	I0805 16:35:28.565277    5352 status.go:343] host is not running, skipping remaining checks
	I0805 16:35:28.565284    5352 status.go:257] multinode-985000-m03 status: &{Name:multinode-985000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
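The stderr trace above shows the shape of each status check: launch a libmachine plugin server for the hyperkit driver, read the hyperkit pid from the machine's JSON to decide host state, and, for a running control plane, probe the apiserver's /healthz endpoint. A minimal Go sketch of that last step follows, reusing the URL from the trace; skipping TLS verification is a sketch-only shortcut, since the real check trusts the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz mirrors the "Checking apiserver healthz at ..." step in
// the trace above. InsecureSkipVerify is a sketch-only shortcut; the
// real check authenticates against the cluster's CA instead.
func checkHealthz(base string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := checkHealthz("https://192.169.0.13:8443") // URL from the trace
	fmt.Println(ok, err)                                 // the run above logged "returned 200: ok"
}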
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr": multinode-985000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-985000-m02
type: Worker
host: Running
kubelet: Stopped

multinode-985000-m03
type: Worker
host: Stopped
kubelet: Stopped

multinode_test.go:275: incorrect number of stopped kubelets: args "out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr": multinode-985000
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-985000-m02
type: Worker
host: Running
kubelet: Stopped

multinode-985000-m03
type: Worker
host: Stopped
kubelet: Stopped
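Both failures reduce to counting kubelet states in the status output: after stopping only m03, the test expects two running kubelets and one stopped one, but m02's kubelet was also down (one running, two stopped). The sketch below shows such a check; the expected counts are inferred from the failure messages, and plain substring counting is an assumption rather than a quotation from multinode_test.go.

package main

import (
	"fmt"
	"strings"
)

// checkKubelets applies the two assertions that failed above: exactly
// two running kubelets and exactly one stopped kubelet are expected
// after stopping only m03. The counts are inferred from the failure text.
func checkKubelets(statusOut string) []string {
	var failures []string
	if n := strings.Count(statusOut, "kubelet: Running"); n != 2 {
		failures = append(failures, fmt.Sprintf("incorrect number of running kubelets: got %d, want 2", n))
	}
	if n := strings.Count(statusOut, "kubelet: Stopped"); n != 1 {
		failures = append(failures, fmt.Sprintf("incorrect number of stopped kubelets: got %d, want 1", n))
	}
	return failures
}

func main() {
	// The status output above has one running and two stopped kubelets,
	// so both checks fail, just as in the test run.
	out := "kubelet: Running\nkubelet: Stopped\nkubelet: Stopped\n"
	for _, f := range checkKubelets(out) {
		fmt.Println(f)
	}
}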

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:244: <<< TestMultiNode/serial/StopNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-985000 logs -n 25: (2.038983257s)
helpers_test.go:252: TestMultiNode/serial/StopNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| kubectl | -p multinode-985000 -- apply -f                   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:22 PDT | 05 Aug 24 16:22 PDT |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- rollout                    | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:22 PDT |                     |
	|         | status deployment/busybox                         |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:32 PDT | 05 Aug 24 16:32 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g -- nslookup               |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b -- nslookup               |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o                | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g                           |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g -- sh                     |                  |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1                          |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec                       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b                           |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |         |         |                     |                     |
	| node    | add -p multinode-985000 -v 3                      | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:35 PDT |
	|         | --alsologtostderr                                 |                  |         |         |                     |                     |
	| node    | multinode-985000 node stop m03                    | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:35 PDT | 05 Aug 24 16:35 PDT |
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 16:20:32
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 16:20:32.303800    4640 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:20:32.303980    4640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:20:32.303986    4640 out.go:304] Setting ErrFile to fd 2...
	I0805 16:20:32.303990    4640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:20:32.304163    4640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:20:32.305609    4640 out.go:298] Setting JSON to false
	I0805 16:20:32.329307    4640 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3003,"bootTime":1722897029,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:20:32.329400    4640 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:20:32.351877    4640 out.go:177] * [multinode-985000] minikube v1.33.1 on Darwin 14.5
	I0805 16:20:32.392940    4640 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:20:32.393020    4640 notify.go:220] Checking for updates...
	I0805 16:20:32.435775    4640 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:20:32.456783    4640 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:20:32.477872    4640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:20:32.499010    4640 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:20:32.519936    4640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:20:32.541363    4640 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:20:32.571784    4640 out.go:177] * Using the hyperkit driver based on user configuration
	I0805 16:20:32.613992    4640 start.go:297] selected driver: hyperkit
	I0805 16:20:32.614020    4640 start.go:901] validating driver "hyperkit" against <nil>
	I0805 16:20:32.614042    4640 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:20:32.618322    4640 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:20:32.618456    4640 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 16:20:32.627075    4640 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 16:20:32.631391    4640 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:20:32.631417    4640 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 16:20:32.631452    4640 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:20:32.631678    4640 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:20:32.631709    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:20:32.631719    4640 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0805 16:20:32.631730    4640 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 16:20:32.631823    4640 start.go:340] cluster config:
	{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:20:32.631925    4640 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:20:32.673756    4640 out.go:177] * Starting "multinode-985000" primary control-plane node in "multinode-985000" cluster
	I0805 16:20:32.695001    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:20:32.695088    4640 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 16:20:32.695107    4640 cache.go:56] Caching tarball of preloaded images
	I0805 16:20:32.695319    4640 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:20:32.695338    4640 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:20:32.695809    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:20:32.695848    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json: {Name:mk470c2e849a0c86ee251e86e74d9f6dfdb47dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:32.696485    4640 start.go:360] acquireMachinesLock for multinode-985000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:20:32.696593    4640 start.go:364] duration metric: took 88.666µs to acquireMachinesLock for "multinode-985000"
	I0805 16:20:32.696646    4640 start.go:93] Provisioning new machine with config: &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:20:32.696745    4640 start.go:125] createHost starting for "" (driver="hyperkit")
	I0805 16:20:32.718059    4640 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:20:32.718351    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:20:32.718416    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:20:32.728195    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52477
	I0805 16:20:32.728547    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:20:32.728938    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:20:32.728948    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:20:32.729147    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:20:32.729251    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:32.729369    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:32.729498    4640 start.go:159] libmachine.API.Create for "multinode-985000" (driver="hyperkit")
	I0805 16:20:32.729521    4640 client.go:168] LocalClient.Create starting
	I0805 16:20:32.729556    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:20:32.729608    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:20:32.729625    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:20:32.729685    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:20:32.729724    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:20:32.729737    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:20:32.729749    4640 main.go:141] libmachine: Running pre-create checks...
	I0805 16:20:32.729760    4640 main.go:141] libmachine: (multinode-985000) Calling .PreCreateCheck
	I0805 16:20:32.729840    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:32.729974    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:32.739224    4640 main.go:141] libmachine: Creating machine...
	I0805 16:20:32.739247    4640 main.go:141] libmachine: (multinode-985000) Calling .Create
	I0805 16:20:32.739475    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:32.739754    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.739457    4648 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:20:32.739852    4640 main.go:141] libmachine: (multinode-985000) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:20:32.920622    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.920524    4648 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa...
	I0805 16:20:32.957084    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.957005    4648 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk...
	I0805 16:20:32.957123    4640 main.go:141] libmachine: (multinode-985000) DBG | Writing magic tar header
	I0805 16:20:32.957134    4640 main.go:141] libmachine: (multinode-985000) DBG | Writing SSH key tar header
	I0805 16:20:32.957531    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.957490    4648 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000 ...
	I0805 16:20:33.331110    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:33.331140    4640 main.go:141] libmachine: (multinode-985000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid
	I0805 16:20:33.331159    4640 main.go:141] libmachine: (multinode-985000) DBG | Using UUID 3ac698fc-f622-443b-898d-9b152fa64288
	I0805 16:20:33.442582    4640 main.go:141] libmachine: (multinode-985000) DBG | Generated MAC e2:6:14:d2:13:ae
	I0805 16:20:33.442603    4640 main.go:141] libmachine: (multinode-985000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:20:33.442636    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:20:33.442669    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:20:33.442719    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3ac698fc-f622-443b-898d-9b152fa64288", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:20:33.442758    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3ac698fc-f622-443b-898d-9b152fa64288 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
	I0805 16:20:33.442774    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:20:33.445733    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Pid is 4651
	I0805 16:20:33.446145    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 0
	I0805 16:20:33.446167    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:33.446227    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:33.447073    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:33.447135    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:33.447152    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:33.447186    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:33.447202    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:33.447214    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:33.447222    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:33.447229    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:33.447247    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:33.447269    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:33.447287    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:33.447304    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:33.447321    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:33.453446    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:20:33.506623    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:20:33.507268    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:20:33.507283    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:20:33.507290    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:20:33.507298    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:20:33.891346    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:20:33.891387    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:20:34.006163    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:20:34.006177    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:20:34.006189    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:20:34.006208    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:20:34.007050    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:20:34.007082    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:20:35.448624    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 1
	I0805 16:20:35.448640    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:35.448724    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:35.449516    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:35.449591    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:35.449607    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:35.449619    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:35.449625    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:35.449648    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:35.449664    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:35.449695    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:35.449711    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:35.449719    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:35.449725    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:35.449731    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:35.449738    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:37.449834    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 2
	I0805 16:20:37.449851    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:37.449867    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:37.450676    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:37.450690    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:37.450697    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:37.450707    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:37.450722    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:37.450733    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:37.450744    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:37.450754    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:37.450771    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:37.450784    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:37.450797    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:37.450809    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:37.450819    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:39.451161    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 3
	I0805 16:20:39.451179    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:39.451277    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:39.452025    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:39.452066    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:39.452089    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:39.452104    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:39.452124    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:39.452141    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:39.452154    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:39.452161    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:39.452167    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:39.452183    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:39.452195    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:39.452202    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:39.452211    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:39.592041    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:20:39.592070    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:20:39.592076    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:20:39.615760    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:20:41.452210    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 4
	I0805 16:20:41.452225    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:41.452325    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:41.453101    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:41.453153    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:41.453162    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:41.453169    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:41.453178    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:41.453187    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:41.453194    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:41.453200    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:41.453219    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:41.453231    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:41.453241    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:41.453250    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:41.453258    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:43.455148    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 5
	I0805 16:20:43.455166    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:43.455244    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:43.456059    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:43.456103    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:20:43.456115    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:20:43.456122    4640 main.go:141] libmachine: (multinode-985000) DBG | Found match: e2:6:14:d2:13:ae
	I0805 16:20:43.456127    4640 main.go:141] libmachine: (multinode-985000) DBG | IP: 192.169.0.13
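The retry loop above resolves the VM's IP by scanning the host's DHCP lease file for the MAC the driver generated. A minimal Go sketch of that lookup, assuming the key=value block layout that macOS's bootpd writes to /var/db/dhcpd_leases (the log shows only the already-parsed form); findIPByMAC is an illustrative name, not the driver's:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // findIPByMAC scans the macOS DHCP lease file for a hardware address
    // and returns the IP bound to it. In each lease block ip_address is
    // assumed to precede hw_address, as bootpd writes it.
    func findIPByMAC(leaseFile, mac string) (string, error) {
        f, err := os.Open(leaseFile)
        if err != nil {
            return "", err
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                // field looks like "1,e2:6:14:d2:13:ae" (type prefix, then MAC)
                if strings.HasSuffix(line, ","+mac) {
                    return ip, nil
                }
            }
        }
        return "", fmt.Errorf("%s not found in %s", mac, leaseFile)
    }

The driver polls this file every two seconds (Attempt 1, 2, 3, ...) because the lease only appears once the guest's DHCP client has run; here it lands on attempt 5.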
	I0805 16:20:43.456181    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:43.456781    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:43.456879    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:43.456972    4640 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 16:20:43.456985    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:20:43.457082    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:43.457144    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:43.457907    4640 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 16:20:43.457917    4640 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 16:20:43.457923    4640 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 16:20:43.457927    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:43.458023    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:43.458126    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:43.458255    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:43.458346    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:43.458472    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:43.458676    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:43.458683    4640 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 16:20:44.513424    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
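WaitForSSH amounts to running `exit 0` over SSH in a retry loop until the guest answers, as the empty command output above shows. A sketch with golang.org/x/crypto/ssh under the same key-based auth the driver uses; waitForSSH and its timeouts are illustrative, not the libmachine implementation:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // waitForSSH dials host:22 and runs "exit 0" until it succeeds or the
    // deadline passes, mirroring the GetSSHHostname/Port/KeyPath calls above.
    func waitForSSH(host, user, keyPath string, timeout time.Duration) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM host keys
            Timeout:         5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if client, err := ssh.Dial("tcp", host+":22", cfg); err == nil {
                sess, err := client.NewSession()
                if err == nil {
                    runErr := sess.Run("exit 0")
                    sess.Close()
                    client.Close()
                    if runErr == nil {
                        return nil
                    }
                } else {
                    client.Close()
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("ssh to %s not ready after %s", host, timeout)
    }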
	I0805 16:20:44.513443    4640 main.go:141] libmachine: Detecting the provisioner...
	I0805 16:20:44.513452    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.513594    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.513694    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.513791    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.513876    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.513996    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.514158    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.514165    4640 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 16:20:44.573082    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 16:20:44.573142    4640 main.go:141] libmachine: found compatible host: buildroot
	I0805 16:20:44.573149    4640 main.go:141] libmachine: Provisioning with buildroot...
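Provisioner detection reduces to parsing the `cat /etc/os-release` output above for its ID field; "buildroot" selects the buildroot provisioner. A sketch (osReleaseID is a hypothetical helper name):

    package main

    import "strings"

    // osReleaseID extracts the ID field from /etc/os-release output, the
    // basis of the "found compatible host: buildroot" decision above.
    func osReleaseID(out string) string {
        for _, line := range strings.Split(out, "\n") {
            if v, ok := strings.CutPrefix(line, "ID="); ok {
                return strings.Trim(v, `"`)
            }
        }
        return ""
    }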
	I0805 16:20:44.573155    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.573299    4640 buildroot.go:166] provisioning hostname "multinode-985000"
	I0805 16:20:44.573311    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.573416    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.573499    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.573585    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.573680    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.573795    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.573922    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.574068    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.574076    4640 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000 && echo "multinode-985000" | sudo tee /etc/hostname
	I0805 16:20:44.637872    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000
	
	I0805 16:20:44.637892    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.638029    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.638132    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.638218    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.638297    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.638429    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.638562    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.638582    4640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:20:44.698340    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
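The hostname step is two commands: set the hostname and /etc/hostname, then ensure /etc/hosts carries a matching 127.0.1.1 entry. A sketch rendering the same shell for an arbitrary machine name (hostsFixCommand is illustrative; the shell body is copied from the command above):

    package main

    import "fmt"

    // hostsFixCommand renders the /etc/hosts snippet from the log for a
    // given machine name: rewrite an existing 127.0.1.1 line if present,
    // otherwise append one.
    func hostsFixCommand(name string) string {
        return fmt.Sprintf(`
            if ! grep -xq '.*\s%[1]s' /etc/hosts; then
                if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
                else
                    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
                fi
            fi`, name)
    }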
	I0805 16:20:44.698360    4640 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:20:44.698377    4640 buildroot.go:174] setting up certificates
	I0805 16:20:44.698389    4640 provision.go:84] configureAuth start
	I0805 16:20:44.698397    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.698544    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:44.698658    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.698750    4640 provision.go:143] copyHostCerts
	I0805 16:20:44.698781    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:20:44.698850    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:20:44.698858    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:20:44.699001    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:20:44.699205    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:20:44.699246    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:20:44.699250    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:20:44.699341    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:20:44.699482    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:20:44.699528    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:20:44.699533    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:20:44.699615    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:20:44.699756    4640 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-985000]
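configureAuth generates a server certificate signed by the machine CA with every name from the san=[...] list baked in. A minimal crypto/x509 sketch under those assumptions (key size, validity window, and the helper name are illustrative; the 26280h expiry matches CertExpiration in the profile dumped later in this log):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a cert covering the given SANs, signed by
    // caCert/caKey, mirroring the san=[...] list in the provision step.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
        dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-985000"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames, // localhost, minikube, multinode-985000
            IPAddresses:  ips,      // 127.0.0.1, 192.169.0.13
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        return der, key, nil
    }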
	I0805 16:20:45.028860    4640 provision.go:177] copyRemoteCerts
	I0805 16:20:45.028920    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:20:45.028938    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.029080    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.029180    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.029338    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.029452    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:45.063652    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:20:45.063724    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:20:45.083743    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:20:45.083800    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0805 16:20:45.103791    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:20:45.103863    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:20:45.123716    4640 provision.go:87] duration metric: took 425.312704ms to configureAuth
	I0805 16:20:45.123731    4640 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:20:45.123881    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:20:45.123894    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:45.124028    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.124115    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.124206    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.124285    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.124381    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.124503    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.124632    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.124639    4640 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:20:45.176256    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:20:45.176269    4640 buildroot.go:70] root file system type: tmpfs
	I0805 16:20:45.176337    4640 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:20:45.176350    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.176482    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.176580    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.176695    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.176782    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.176911    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.177045    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.177090    4640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:20:45.240992    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:20:45.241023    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.241166    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.241270    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.241382    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.241469    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.241590    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.241743    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.241755    4640 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:20:46.765402    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:20:46.765418    4640 main.go:141] libmachine: Checking connection to Docker...
	I0805 16:20:46.765424    4640 main.go:141] libmachine: (multinode-985000) Calling .GetURL
	I0805 16:20:46.765563    4640 main.go:141] libmachine: Docker is up and running!
	I0805 16:20:46.765570    4640 main.go:141] libmachine: Reticulating splines...
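Note the update pattern a few lines up: write docker.service.new, `diff -u` it against the live unit, and only on a difference move it into place and daemon-reload/enable/restart. That makes the write idempotent; docker is restarted only when the rendered unit actually changed (here the diff fails because no unit existed yet, so the branch runs). A sketch composing the same one-liner (updateUnitCommand is illustrative):

    package main

    import "fmt"

    // updateUnitCommand reproduces the idempotent-update shell above:
    // swap in the new unit and restart the service only when it differs.
    func updateUnitCommand(unit string) string {
        path := "/lib/systemd/system/" + unit
        return fmt.Sprintf(
            "sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
                "sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && "+
                "sudo systemctl -f restart %[2]s; }",
            path, unit)
    }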
	I0805 16:20:46.765575    4640 client.go:171] duration metric: took 14.036043683s to LocalClient.Create
	I0805 16:20:46.765592    4640 start.go:167] duration metric: took 14.036090848s to libmachine.API.Create "multinode-985000"
	I0805 16:20:46.765602    4640 start.go:293] postStartSetup for "multinode-985000" (driver="hyperkit")
	I0805 16:20:46.765609    4640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:20:46.765620    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.765765    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:20:46.765778    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.765878    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.765972    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.766070    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.766168    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.808597    4640 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:20:46.814840    4640 command_runner.go:130] > NAME=Buildroot
	I0805 16:20:46.814852    4640 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:20:46.814856    4640 command_runner.go:130] > ID=buildroot
	I0805 16:20:46.814869    4640 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:20:46.814873    4640 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:20:46.814969    4640 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:20:46.814985    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:20:46.815099    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:20:46.815290    4640 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:20:46.815297    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:20:46.815526    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:20:46.832473    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:20:46.852626    4640 start.go:296] duration metric: took 87.015317ms for postStartSetup
	I0805 16:20:46.852653    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:46.853264    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:46.853417    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:20:46.853762    4640 start.go:128] duration metric: took 14.156998155s to createHost
	I0805 16:20:46.853776    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.853870    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.853964    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.854078    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.854160    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.854284    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:46.854405    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:46.854413    4640 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:20:46.906137    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722900047.071906799
	
	I0805 16:20:46.906149    4640 fix.go:216] guest clock: 1722900047.071906799
	I0805 16:20:46.906154    4640 fix.go:229] Guest: 2024-08-05 16:20:47.071906799 -0700 PDT Remote: 2024-08-05 16:20:46.85377 -0700 PDT m=+14.585721958 (delta=218.136799ms)
	I0805 16:20:46.906178    4640 fix.go:200] guest clock delta is within tolerance: 218.136799ms
	I0805 16:20:46.906182    4640 start.go:83] releasing machines lock for "multinode-985000", held for 14.209573761s
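The guest-clock check parses `date +%s.%N` output and compares it with the host clock, accepting drift within a tolerance (218ms here). A sketch of the parse (guestClockDelta is an illustrative name); since %N is always nine digits, the fractional part reads directly as nanoseconds:

    package main

    import (
        "fmt"
        "strings"
        "time"
    )

    // guestClockDelta parses "seconds.nanoseconds" from `date +%s.%N`
    // and returns the guest's offset from the local clock.
    func guestClockDelta(out string) (time.Duration, error) {
        var sec, nsec int64
        if _, err := fmt.Sscanf(strings.TrimSpace(out), "%d.%d", &sec, &nsec); err != nil {
            return 0, err
        }
        return time.Since(time.Unix(sec, nsec)), nil
    }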
	I0805 16:20:46.906200    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906321    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:46.906429    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906734    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906832    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906917    4640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:20:46.906947    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.906977    4640 ssh_runner.go:195] Run: cat /version.json
	I0805 16:20:46.906987    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.907036    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.907080    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.907105    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.907167    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.907190    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.907251    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.907285    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.907353    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.936969    4640 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0805 16:20:46.937263    4640 ssh_runner.go:195] Run: systemctl --version
	I0805 16:20:46.992747    4640 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:20:46.993626    4640 command_runner.go:130] > systemd 252 (252)
	I0805 16:20:46.993660    4640 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0805 16:20:46.993799    4640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:20:46.998949    4640 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:20:46.998969    4640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:20:46.999002    4640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:20:47.012276    4640 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:20:47.012544    4640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:20:47.012556    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:20:47.012657    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:20:47.027593    4640 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:20:47.027660    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:20:47.035836    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:20:47.044911    4640 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:20:47.044968    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:20:47.053571    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:20:47.061858    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:20:47.070031    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:20:47.078524    4640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:20:47.087870    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:20:47.096303    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:20:47.104482    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:20:47.112756    4640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:20:47.120033    4640 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:20:47.120127    4640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:20:47.128644    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:47.220387    4640 ssh_runner.go:195] Run: sudo systemctl restart containerd
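The sed series above rewrites /etc/containerd/config.toml in place: pause image, restrict_oom_score_adj, SystemdCgroup=false to match the cgroupfs driver, runc v2, and the CNI conf dir, followed by the ip_forward sysctl and a containerd restart. The same edit for the SystemdCgroup line expressed in Go, assuming the stock config layout (setCgroupfs is illustrative):

    package main

    import (
        "os"
        "regexp"
    )

    // setCgroupfs flips SystemdCgroup to false in containerd's config,
    // the Go equivalent of the sed one-liner in the log above.
    func setCgroupfs(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
        return os.WriteFile(path, out, 0o644)
    }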
	I0805 16:20:47.239567    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:20:47.239642    4640 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:20:47.254939    4640 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:20:47.255001    4640 command_runner.go:130] > [Unit]
	I0805 16:20:47.255011    4640 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:20:47.255015    4640 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:20:47.255020    4640 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:20:47.255026    4640 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:20:47.255030    4640 command_runner.go:130] > StartLimitBurst=3
	I0805 16:20:47.255034    4640 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:20:47.255037    4640 command_runner.go:130] > [Service]
	I0805 16:20:47.255041    4640 command_runner.go:130] > Type=notify
	I0805 16:20:47.255055    4640 command_runner.go:130] > Restart=on-failure
	I0805 16:20:47.255063    4640 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:20:47.255073    4640 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:20:47.255080    4640 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:20:47.255088    4640 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:20:47.255094    4640 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:20:47.255099    4640 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:20:47.255112    4640 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:20:47.255120    4640 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:20:47.255128    4640 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:20:47.255134    4640 command_runner.go:130] > ExecStart=
	I0805 16:20:47.255164    4640 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:20:47.255172    4640 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:20:47.255182    4640 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:20:47.255189    4640 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:20:47.255193    4640 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:20:47.255196    4640 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:20:47.255200    4640 command_runner.go:130] > LimitCORE=infinity
	I0805 16:20:47.255205    4640 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:20:47.255209    4640 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:20:47.255212    4640 command_runner.go:130] > TasksMax=infinity
	I0805 16:20:47.255215    4640 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:20:47.255220    4640 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:20:47.255225    4640 command_runner.go:130] > Delegate=yes
	I0805 16:20:47.255230    4640 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:20:47.255233    4640 command_runner.go:130] > KillMode=process
	I0805 16:20:47.255236    4640 command_runner.go:130] > [Install]
	I0805 16:20:47.255259    4640 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:20:47.255324    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:20:47.269909    4640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:20:47.286027    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:20:47.296365    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:20:47.306405    4640 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:20:47.369760    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:20:47.379998    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:20:47.394696    4640 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:20:47.394951    4640 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:20:47.397850    4640 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:20:47.398038    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:20:47.406063    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:20:47.419537    4640 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:20:47.514227    4640 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:20:47.637079    4640 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:20:47.637156    4640 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
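The 130-byte /etc/docker/daemon.json written here pins docker to the cgroupfs driver; the log shows only the file's size, so the payload below is an assumption, a plausible minimal rendering rather than the exact file minikube ships:

    package main

    import "encoding/json"

    // daemonJSON renders a minimal daemon.json selecting a cgroup driver.
    // Illustrative only: the real file may carry additional keys.
    func daemonJSON(driver string) ([]byte, error) {
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=" + driver},
        }
        return json.MarshalIndent(cfg, "", "  ")
    }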
	I0805 16:20:47.651314    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:47.748259    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:20:50.076345    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.32806615s)
	I0805 16:20:50.076407    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:20:50.086580    4640 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 16:20:50.099944    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:20:50.110410    4640 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:20:50.206329    4640 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:20:50.317239    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:50.417670    4640 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:20:50.431617    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:20:50.443305    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:50.555307    4640 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:20:50.610408    4640 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:20:50.610481    4640 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:20:50.614751    4640 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0805 16:20:50.614762    4640 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0805 16:20:50.614767    4640 command_runner.go:130] > Device: 0,22	Inode: 806         Links: 1
	I0805 16:20:50.614772    4640 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0805 16:20:50.614775    4640 command_runner.go:130] > Access: 2024-08-05 23:20:50.735793184 +0000
	I0805 16:20:50.614784    4640 command_runner.go:130] > Modify: 2024-08-05 23:20:50.735793184 +0000
	I0805 16:20:50.614789    4640 command_runner.go:130] > Change: 2024-08-05 23:20:50.736793062 +0000
	I0805 16:20:50.614792    4640 command_runner.go:130] >  Birth: -
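"Will wait 60s for socket path" is a poll until the unix socket appears; the real code shells out to stat as shown above, but a local sketch of the same wait (waitForSocket is illustrative):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists and is a socket, as in the
    // 60s wait for /var/run/cri-dockerd.sock above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("socket %s not ready after %s", path, timeout)
    }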
	I0805 16:20:50.614829    4640 start.go:563] Will wait 60s for crictl version
	I0805 16:20:50.614890    4640 ssh_runner.go:195] Run: which crictl
	I0805 16:20:50.617807    4640 command_runner.go:130] > /usr/bin/crictl
	I0805 16:20:50.617933    4640 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:20:50.644026    4640 command_runner.go:130] > Version:  0.1.0
	I0805 16:20:50.644070    4640 command_runner.go:130] > RuntimeName:  docker
	I0805 16:20:50.644117    4640 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0805 16:20:50.644195    4640 command_runner.go:130] > RuntimeApiVersion:  v1
	I0805 16:20:50.645396    4640 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0805 16:20:50.645460    4640 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:20:50.661131    4640 command_runner.go:130] > 27.1.1
	I0805 16:20:50.662194    4640 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:20:50.677860    4640 command_runner.go:130] > 27.1.1
	I0805 16:20:50.700872    4640 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 16:20:50.700922    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:50.701316    4640 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0805 16:20:50.706154    4640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:20:50.715610    4640 kubeadm.go:883] updating cluster {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 16:20:50.715677    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:20:50.715736    4640 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:20:50.733572    4640 docker.go:685] Got preloaded images: 
	I0805 16:20:50.733584    4640 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
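The preload decision checks `docker images --format {{.Repository}}:{{.Tag}}` output for the kube-apiserver image at the target version; a fresh VM returns nothing, so the tarball path is taken. A sketch of the check (needsPreload is illustrative):

    package main

    import "strings"

    // needsPreload reports whether the kube-apiserver image for the
    // target Kubernetes version is absent from `docker images` output.
    func needsPreload(imagesOut, k8sVersion string) bool {
        want := "registry.k8s.io/kube-apiserver:" + k8sVersion
        for _, img := range strings.Split(strings.TrimSpace(imagesOut), "\n") {
            if img == want {
                return false
            }
        }
        return true
    }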
	I0805 16:20:50.733634    4640 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:20:50.741005    4640 command_runner.go:139] > {"Repositories":{}}
	I0805 16:20:50.741090    4640 ssh_runner.go:195] Run: which lz4
	I0805 16:20:50.744527    4640 command_runner.go:130] > /usr/bin/lz4
	I0805 16:20:50.744558    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0805 16:20:50.744692    4640 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 16:20:50.747718    4640 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 16:20:50.747836    4640 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 16:20:50.747851    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0805 16:20:51.865752    4640 docker.go:649] duration metric: took 1.121114736s to copy over tarball
	I0805 16:20:51.865833    4640 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 16:20:54.241811    4640 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.375959074s)
	I0805 16:20:54.241825    4640 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0805 16:20:54.267125    4640 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:20:54.275283    4640 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0805 16:20:54.275373    4640 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0805 16:20:54.288931    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:54.386395    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:20:56.795159    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.408741228s)
	I0805 16:20:56.795248    4640 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:20:56.808093    4640 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0805 16:20:56.808107    4640 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0805 16:20:56.808111    4640 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0805 16:20:56.808116    4640 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0805 16:20:56.808120    4640 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0805 16:20:56.808123    4640 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0805 16:20:56.808128    4640 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0805 16:20:56.808135    4640 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:20:56.809018    4640 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 16:20:56.809035    4640 cache_images.go:84] Images are preloaded, skipping loading
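One way to cross-check that the preload really covers everything kubeadm needs for this version is a sketch run inside the guest, using the staged binaries under /var/lib/minikube/binaries (confirmed present later in this log):
	# what kubeadm expects for v1.30.3
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config images list --kubernetes-version v1.30.3
	# what the Docker runtime actually has (the same command minikube runs above)
	docker images --format '{{.Repository}}:{{.Tag}}'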
	I0805 16:20:56.809048    4640 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0805 16:20:56.809127    4640 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-985000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
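The [Unit]/[Service] fragment above is the systemd drop-in that minikube writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below). Inside the guest, the merged result can be inspected with:
	systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart   # the effective kubelet command line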
	I0805 16:20:56.809195    4640 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 16:20:56.847007    4640 command_runner.go:130] > cgroupfs
	I0805 16:20:56.847610    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:20:56.847620    4640 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 16:20:56.847630    4640 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 16:20:56.847650    4640 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-985000 NodeName:multinode-985000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 16:20:56.847744    4640 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-985000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 16:20:56.847807    4640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 16:20:56.855919    4640 command_runner.go:130] > kubeadm
	I0805 16:20:56.855931    4640 command_runner.go:130] > kubectl
	I0805 16:20:56.855934    4640 command_runner.go:130] > kubelet
	I0805 16:20:56.855959    4640 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:20:56.856010    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 16:20:56.863284    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0805 16:20:56.876753    4640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:20:56.890292    4640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
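At this point the rendered config only exists as kubeadm.yaml.new; it is promoted to kubeadm.yaml just before init runs. A sketch for exercising it without side effects, using the same staged binary and path:
	sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run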
	I0805 16:20:56.904628    4640 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0805 16:20:56.907711    4640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:20:56.917108    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:57.013172    4640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:20:57.028650    4640 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000 for IP: 192.169.0.13
	I0805 16:20:57.028663    4640 certs.go:194] generating shared ca certs ...
	I0805 16:20:57.028674    4640 certs.go:226] acquiring lock for ca certs: {Name:mkb83e058d89c7d4e66f4136f377a3c305b13735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.028863    4640 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key
	I0805 16:20:57.028935    4640 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key
	I0805 16:20:57.028946    4640 certs.go:256] generating profile certs ...
	I0805 16:20:57.028995    4640 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key
	I0805 16:20:57.029007    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt with IP's: []
	I0805 16:20:57.088127    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt ...
	I0805 16:20:57.088142    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt: {Name:mkb7087fa165ae496621b10df42dfd2f8603360a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.088531    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key ...
	I0805 16:20:57.088540    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key: {Name:mk37e627de9c39a2300d317d721ebf92a202a17e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.088775    4640 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec
	I0805 16:20:57.088790    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.13]
	I0805 16:20:57.189318    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec ...
	I0805 16:20:57.189336    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec: {Name:mkb4501af4f6db766eb719de2f42fc564a23d2d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.189653    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec ...
	I0805 16:20:57.189669    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec: {Name:mke641ddecfc5629bb592a5b6321d446ed3b31bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.189903    4640 certs.go:381] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt
	I0805 16:20:57.190140    4640 certs.go:385] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key
	I0805 16:20:57.190318    4640 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key
	I0805 16:20:57.190336    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt with IP's: []
	I0805 16:20:57.386717    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt ...
	I0805 16:20:57.386733    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt: {Name:mk486344c8c5b8383e5349f68a995b553e8d31c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.387043    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key ...
	I0805 16:20:57.387052    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key: {Name:mk2b24e1a5e962e12395adf21e4f6ad64901ee0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.387278    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 16:20:57.387306    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 16:20:57.387325    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 16:20:57.387349    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 16:20:57.387368    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 16:20:57.387391    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 16:20:57.387411    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 16:20:57.387432    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 16:20:57.387531    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem (1338 bytes)
	W0805 16:20:57.387583    4640 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0805 16:20:57.387591    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:20:57.387621    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem (1082 bytes)
	I0805 16:20:57.387656    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:20:57.387684    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem (1675 bytes)
	I0805 16:20:57.387747    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:20:57.387781    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem -> /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.387803    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.387822    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.388188    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:20:57.408800    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 16:20:57.429927    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:20:57.449924    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:20:57.470736    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 16:20:57.490564    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 16:20:57.511342    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:20:57.531190    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 16:20:57.551984    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0805 16:20:57.571601    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0805 16:20:57.592369    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:20:57.611866    4640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 16:20:57.626527    4640 ssh_runner.go:195] Run: openssl version
	I0805 16:20:57.630504    4640 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0805 16:20:57.630711    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0805 16:20:57.638913    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642115    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642280    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642315    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.646345    4640 command_runner.go:130] > 51391683
	I0805 16:20:57.646544    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0805 16:20:57.654953    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0805 16:20:57.663842    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667242    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667258    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667300    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.671438    4640 command_runner.go:130] > 3ec20f2e
	I0805 16:20:57.671648    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 16:20:57.679692    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:20:57.688061    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691411    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691493    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691531    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.695572    4640 command_runner.go:130] > b5213941
	I0805 16:20:57.695754    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
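The three ln -fs commands above follow OpenSSL's hashed-directory layout: each link in /etc/ssl/certs is named after the subject-name hash that `openssl x509 -hash` prints (51391683, 3ec20f2e, b5213941), which is how -CApath lookups locate a CA. For example, inside the guest:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem         # prints b5213941
	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem # self-signed root resolves to OK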
	I0805 16:20:57.704703    4640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:20:57.707752    4640 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 16:20:57.707872    4640 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 16:20:57.707921    4640 kubeadm.go:392] StartCluster: {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:20:57.708054    4640 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:20:57.720408    4640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 16:20:57.731114    4640 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0805 16:20:57.731128    4640 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0805 16:20:57.731133    4640 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0805 16:20:57.731194    4640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 16:20:57.739645    4640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 16:20:57.751095    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0805 16:20:57.751108    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0805 16:20:57.751113    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0805 16:20:57.751120    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:20:57.751266    4640 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:20:57.751273    4640 kubeadm.go:157] found existing configuration files:
	
	I0805 16:20:57.751324    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 16:20:57.759086    4640 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:20:57.759185    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:20:57.759233    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 16:20:57.769060    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 16:20:57.778103    4640 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:20:57.778143    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:20:57.778190    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 16:20:57.786612    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 16:20:57.794733    4640 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:20:57.794754    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:20:57.794796    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 16:20:57.802671    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 16:20:57.810242    4640 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:20:57.810264    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:20:57.810299    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 16:20:57.818339    4640 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 16:20:57.890449    4640 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 16:20:57.890461    4640 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0805 16:20:57.890501    4640 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 16:20:57.890507    4640 command_runner.go:130] > [preflight] Running pre-flight checks
	I0805 16:20:57.984851    4640 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 16:20:57.984855    4640 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 16:20:57.984956    4640 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 16:20:57.984962    4640 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 16:20:57.985041    4640 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0805 16:20:57.985038    4640 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0805 16:20:58.152965    4640 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:20:58.152995    4640 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:20:58.175785    4640 out.go:204]   - Generating certificates and keys ...
	I0805 16:20:58.175840    4640 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0805 16:20:58.175851    4640 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 16:20:58.175914    4640 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0805 16:20:58.175920    4640 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 16:20:58.229002    4640 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 16:20:58.229016    4640 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 16:20:58.322701    4640 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 16:20:58.322717    4640 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0805 16:20:58.394063    4640 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 16:20:58.394077    4640 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0805 16:20:58.601975    4640 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 16:20:58.601995    4640 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0805 16:20:58.821056    4640 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 16:20:58.821065    4640 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0805 16:20:58.821204    4640 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:58.821214    4640 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.150811    4640 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 16:20:59.150817    4640 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0805 16:20:59.151036    4640 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.151046    4640 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.206073    4640 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 16:20:59.206088    4640 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 16:20:59.294956    4640 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 16:20:59.294966    4640 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 16:20:59.348591    4640 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 16:20:59.348602    4640 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0805 16:20:59.348788    4640 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:20:59.348797    4640 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:20:59.511379    4640 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:20:59.511395    4640 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:20:59.789652    4640 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 16:20:59.789666    4640 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 16:20:59.965508    4640 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:20:59.965517    4640 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:21:00.208268    4640 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:21:00.208284    4640 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:21:00.402575    4640 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:21:00.402582    4640 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:21:00.409122    4640 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 16:21:00.409137    4640 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 16:21:00.410639    4640 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:21:00.410652    4640 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:21:00.430944    4640 out.go:204]   - Booting up control plane ...
	I0805 16:21:00.431017    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:21:00.431032    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:21:00.431106    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:21:00.431106    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:21:00.431174    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:21:00.431182    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:21:00.431274    4640 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:21:00.431286    4640 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:21:00.431361    4640 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:21:00.431369    4640 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:21:00.431399    4640 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 16:21:00.431405    4640 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0805 16:21:00.540991    4640 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 16:21:00.541004    4640 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 16:21:00.541076    4640 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 16:21:00.541081    4640 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 16:21:01.042556    4640 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.719164ms
	I0805 16:21:01.042573    4640 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.719164ms
	I0805 16:21:01.042632    4640 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 16:21:01.042639    4640 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 16:21:05.541995    4640 kubeadm.go:310] [api-check] The API server is healthy after 4.502407968s
	I0805 16:21:05.542014    4640 command_runner.go:130] > [api-check] The API server is healthy after 4.502407968s
	I0805 16:21:05.551474    4640 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 16:21:05.551486    4640 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 16:21:05.558278    4640 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 16:21:05.558284    4640 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 16:21:05.572116    4640 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 16:21:05.572130    4640 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0805 16:21:05.572281    4640 kubeadm.go:310] [mark-control-plane] Marking the node multinode-985000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 16:21:05.572292    4640 command_runner.go:130] > [mark-control-plane] Marking the node multinode-985000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 16:21:05.579214    4640 kubeadm.go:310] [bootstrap-token] Using token: 0mwls8.ribzsy6ooov2flu0
	I0805 16:21:05.579225    4640 command_runner.go:130] > [bootstrap-token] Using token: 0mwls8.ribzsy6ooov2flu0
	I0805 16:21:05.613851    4640 out.go:204]   - Configuring RBAC rules ...
	I0805 16:21:05.613974    4640 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 16:21:05.613988    4640 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 16:21:05.655317    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 16:21:05.655329    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 16:21:05.659733    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 16:21:05.659737    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 16:21:05.661608    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 16:21:05.661619    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 16:21:05.663605    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 16:21:05.663612    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 16:21:05.665771    4640 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 16:21:05.665778    4640 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 16:21:05.947572    4640 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 16:21:05.947585    4640 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 16:21:06.357765    4640 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 16:21:06.357776    4640 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0805 16:21:06.946930    4640 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 16:21:06.946942    4640 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0805 16:21:06.947937    4640 kubeadm.go:310] 
	I0805 16:21:06.947989    4640 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 16:21:06.947996    4640 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0805 16:21:06.948000    4640 kubeadm.go:310] 
	I0805 16:21:06.948071    4640 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 16:21:06.948080    4640 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0805 16:21:06.948088    4640 kubeadm.go:310] 
	I0805 16:21:06.948121    4640 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 16:21:06.948125    4640 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0805 16:21:06.948179    4640 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 16:21:06.948187    4640 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 16:21:06.948229    4640 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 16:21:06.948234    4640 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 16:21:06.948237    4640 kubeadm.go:310] 
	I0805 16:21:06.948284    4640 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 16:21:06.948302    4640 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0805 16:21:06.948309    4640 kubeadm.go:310] 
	I0805 16:21:06.948354    4640 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 16:21:06.948367    4640 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 16:21:06.948375    4640 kubeadm.go:310] 
	I0805 16:21:06.948414    4640 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 16:21:06.948418    4640 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0805 16:21:06.948479    4640 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 16:21:06.948488    4640 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 16:21:06.948558    4640 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 16:21:06.948564    4640 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 16:21:06.948570    4640 kubeadm.go:310] 
	I0805 16:21:06.948633    4640 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 16:21:06.948638    4640 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0805 16:21:06.948701    4640 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 16:21:06.948708    4640 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0805 16:21:06.948715    4640 kubeadm.go:310] 
	I0805 16:21:06.948788    4640 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.948795    4640 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.948879    4640 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf \
	I0805 16:21:06.948886    4640 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf \
	I0805 16:21:06.948905    4640 kubeadm.go:310] 	--control-plane 
	I0805 16:21:06.948911    4640 command_runner.go:130] > 	--control-plane 
	I0805 16:21:06.948916    4640 kubeadm.go:310] 
	I0805 16:21:06.948980    4640 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 16:21:06.948984    4640 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0805 16:21:06.948987    4640 kubeadm.go:310] 
	I0805 16:21:06.949052    4640 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.949057    4640 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.949136    4640 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf 
	I0805 16:21:06.949141    4640 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf 
	I0805 16:21:06.949613    4640 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 16:21:06.949621    4640 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
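The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 digest over the DER-encoded public key of the cluster CA, which lives in the certificatesDir configured earlier. A sketch for recomputing it on the control-plane node:
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der \
	  | openssl dgst -sha256    # should print 524477c6...aba8adf, matching the token above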
	I0805 16:21:06.949644    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:21:06.949649    4640 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 16:21:06.972147    4640 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0805 16:21:07.030449    4640 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0805 16:21:07.036220    4640 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0805 16:21:07.036233    4640 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0805 16:21:07.036239    4640 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0805 16:21:07.036249    4640 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 16:21:07.036254    4640 command_runner.go:130] > Access: 2024-08-05 23:20:43.694299549 +0000
	I0805 16:21:07.036259    4640 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0805 16:21:07.036264    4640 command_runner.go:130] > Change: 2024-08-05 23:20:41.058596444 +0000
	I0805 16:21:07.036266    4640 command_runner.go:130] >  Birth: -
	I0805 16:21:07.036368    4640 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0805 16:21:07.036375    4640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0805 16:21:07.050414    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0805 16:21:07.243070    4640 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0805 16:21:07.246445    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0805 16:21:07.250670    4640 command_runner.go:130] > serviceaccount/kindnet created
	I0805 16:21:07.255971    4640 command_runner.go:130] > daemonset.apps/kindnet created
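Once the kindnet objects are created, the rollout can be confirmed with kubectl (the app=kindnet label is assumed from the upstream kindnet manifest, which this log does not show):
	kubectl -n kube-system rollout status daemonset/kindnet
	kubectl -n kube-system get pods -l app=kindnet -o wide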
	I0805 16:21:07.257424    4640 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 16:21:07.257500    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-985000 minikube.k8s.io/updated_at=2024_08_05T16_21_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=multinode-985000 minikube.k8s.io/primary=true
	I0805 16:21:07.257502    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.266956    4640 command_runner.go:130] > -16
	I0805 16:21:07.267023    4640 ops.go:34] apiserver oom_adj: -16
	I0805 16:21:07.390396    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0805 16:21:07.392070    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.400579    4640 command_runner.go:130] > node/multinode-985000 labeled
	I0805 16:21:07.456213    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:07.893323    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.956622    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:08.392391    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:08.450793    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:08.892411    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:08.950456    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:09.393238    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:09.450291    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:09.892156    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:09.951159    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:10.393019    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:10.451734    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:10.893100    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:10.954360    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:11.393009    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:11.452879    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:11.894187    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:11.953480    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:12.392194    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:12.452444    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:12.894265    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:12.955367    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:13.392882    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:13.455680    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:13.892568    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:13.950195    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:14.393254    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:14.452940    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:14.892187    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:14.948447    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:15.392762    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:15.451815    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:15.892531    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:15.952781    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:16.393008    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:16.454659    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:16.892423    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:16.957989    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:17.392489    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:17.452653    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:17.892453    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:17.953809    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:18.392692    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:18.450726    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:18.893940    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:18.957266    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:19.393402    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:19.452345    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:19.892761    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:19.952524    4640 command_runner.go:130] > NAME      SECRETS   AGE
	I0805 16:21:19.952537    4640 command_runner.go:130] > default   0         1s
	I0805 16:21:19.952551    4640 kubeadm.go:1113] duration metric: took 12.695106906s to wait for elevateKubeSystemPrivileges
	I0805 16:21:19.952568    4640 kubeadm.go:394] duration metric: took 22.244643678s to StartCluster
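
The 12.7s elevateKubeSystemPrivileges wait above is a plain poll loop: the same "kubectl get sa default" command is retried on the node roughly every 500ms until the default ServiceAccount exists. A minimal standalone Go sketch of that pattern (hypothetical code, not minikube's actual implementation; the bare "kubectl" path and the 2-minute timeout are assumptions):

	// pollsa.go: poll until the "default" ServiceAccount exists, mirroring
	// the retry loop in the log above (one attempt roughly every 500ms).
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute) // assumed timeout
		for time.Now().Before(deadline) {
			// The log runs the equivalent command via ssh_runner as:
			//   sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
			out, err := exec.Command("kubectl", "get", "sa", "default").CombinedOutput()
			if err == nil {
				fmt.Printf("default ServiceAccount exists:\n%s", out)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for default ServiceAccount")
	}
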
	I0805 16:21:19.952584    4640 settings.go:142] acquiring lock: {Name:mk564a817a54ecf2aef16a4d2309e85208c0231f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:21:19.952678    4640 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:19.953130    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/kubeconfig: {Name:mk2a0d8b4d330b3c26432fc65d015ddf98a9cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:21:19.953387    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 16:21:19.953391    4640 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:21:19.953437    4640 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 16:21:19.953474    4640 addons.go:69] Setting storage-provisioner=true in profile "multinode-985000"
	I0805 16:21:19.953501    4640 addons.go:234] Setting addon storage-provisioner=true in "multinode-985000"
	I0805 16:21:19.953507    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:19.953501    4640 addons.go:69] Setting default-storageclass=true in profile "multinode-985000"
	I0805 16:21:19.953520    4640 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:21:19.953542    4640 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-985000"
	I0805 16:21:19.953772    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.953787    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.953870    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.953897    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.962985    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52500
	I0805 16:21:19.963341    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52502
	I0805 16:21:19.963365    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.963645    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.963722    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.963735    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.963997    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.964004    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.964027    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.964249    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.964372    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.964430    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.964458    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.964465    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.964535    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.966651    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:19.966874    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:19.967275    4640 cert_rotation.go:137] Starting client certificate rotation controller
	I0805 16:21:19.967411    4640 addons.go:234] Setting addon default-storageclass=true in "multinode-985000"
	I0805 16:21:19.967434    4640 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:21:19.967665    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.967688    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.973226    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52504
	I0805 16:21:19.973568    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.973922    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.973942    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.974163    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.974282    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.974363    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.974444    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.975405    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:21:19.975491    4640 out.go:177] * Verifying Kubernetes components...
	I0805 16:21:19.976182    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52506
	I0805 16:21:19.976461    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.976795    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.976812    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.976999    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.977392    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.977409    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.986027    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52508
	I0805 16:21:19.986361    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.986712    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.986741    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.986959    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.987071    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.987149    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.987227    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.988179    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:21:19.988299    4640 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 16:21:19.988307    4640 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 16:21:19.988315    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:21:19.988395    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:21:19.988484    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:21:19.988568    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:21:19.988639    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:21:20.032241    4640 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:21:20.032361    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:20.069496    4640 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:21:20.069510    4640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 16:21:20.069530    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:21:20.069717    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:21:20.069824    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:21:20.069935    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:21:20.070041    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:21:20.084762    4640 command_runner.go:130] > apiVersion: v1
	I0805 16:21:20.084775    4640 command_runner.go:130] > data:
	I0805 16:21:20.084779    4640 command_runner.go:130] >   Corefile: |
	I0805 16:21:20.084782    4640 command_runner.go:130] >     .:53 {
	I0805 16:21:20.084785    4640 command_runner.go:130] >         errors
	I0805 16:21:20.084790    4640 command_runner.go:130] >         health {
	I0805 16:21:20.084794    4640 command_runner.go:130] >            lameduck 5s
	I0805 16:21:20.084796    4640 command_runner.go:130] >         }
	I0805 16:21:20.084812    4640 command_runner.go:130] >         ready
	I0805 16:21:20.084822    4640 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0805 16:21:20.084829    4640 command_runner.go:130] >            pods insecure
	I0805 16:21:20.084833    4640 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0805 16:21:20.084841    4640 command_runner.go:130] >            ttl 30
	I0805 16:21:20.084853    4640 command_runner.go:130] >         }
	I0805 16:21:20.084863    4640 command_runner.go:130] >         prometheus :9153
	I0805 16:21:20.084868    4640 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0805 16:21:20.084880    4640 command_runner.go:130] >            max_concurrent 1000
	I0805 16:21:20.084884    4640 command_runner.go:130] >         }
	I0805 16:21:20.084887    4640 command_runner.go:130] >         cache 30
	I0805 16:21:20.084898    4640 command_runner.go:130] >         loop
	I0805 16:21:20.084902    4640 command_runner.go:130] >         reload
	I0805 16:21:20.084905    4640 command_runner.go:130] >         loadbalance
	I0805 16:21:20.084908    4640 command_runner.go:130] >     }
	I0805 16:21:20.084911    4640 command_runner.go:130] > kind: ConfigMap
	I0805 16:21:20.084914    4640 command_runner.go:130] > metadata:
	I0805 16:21:20.084921    4640 command_runner.go:130] >   creationTimestamp: "2024-08-05T23:21:06Z"
	I0805 16:21:20.084926    4640 command_runner.go:130] >   name: coredns
	I0805 16:21:20.084929    4640 command_runner.go:130] >   namespace: kube-system
	I0805 16:21:20.084933    4640 command_runner.go:130] >   resourceVersion: "266"
	I0805 16:21:20.084937    4640 command_runner.go:130] >   uid: 5057af03-8824-4e67-a4b6-ef90c1ded7ce
	I0805 16:21:20.085056    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0805 16:21:20.184335    4640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:21:20.203408    4640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 16:21:20.278639    4640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:21:20.507141    4640 command_runner.go:130] > configmap/coredns replaced
	I0805 16:21:20.511660    4640 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
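
The sed program in the replace pipeline above makes two insertions into the Corefile dumped at 16:21:20.084: a "log" directive before "errors", and a "hosts" block mapping 192.169.0.1 to host.minikube.internal before "forward". After "configmap/coredns replaced", the Corefile should therefore read approximately:

	.:53 {
	    log
	    errors
	    health {
	       lameduck 5s
	    }
	    ready
	    kubernetes cluster.local in-addr.arpa ip6.arpa {
	       pods insecure
	       fallthrough in-addr.arpa ip6.arpa
	       ttl 30
	    }
	    prometheus :9153
	    hosts {
	       192.169.0.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf {
	       max_concurrent 1000
	    }
	    cache 30
	    loop
	    reload
	    loadbalance
	}
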
	I0805 16:21:20.511929    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:20.511932    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:20.512124    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:20.512125    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:20.512341    4640 node_ready.go:35] waiting up to 6m0s for node "multinode-985000" to be "Ready" ...
	I0805 16:21:20.512409    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:20.512416    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.512423    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.512424    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:20.512428    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.512430    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.512438    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.512446    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.520076    4640 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0805 16:21:20.520087    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.520092    4640 round_trippers.go:580]     Audit-Id: 304f14c4-a466-4fb6-b401-b28f4df4dfa1
	I0805 16:21:20.520095    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.520103    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.520107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.520111    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.520113    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:20.520117    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.521443    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.521456    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.521464    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.521474    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"381","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.521479    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.521487    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.521502    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.521511    4640 round_trippers.go:580]     Audit-Id: bcd9e393-6b08-4ffb-a73b-6e7c430f0212
	I0805 16:21:20.521518    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.521831    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:20.521865    4640 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"381","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.521904    4640 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:20.521914    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.521921    4640 round_trippers.go:473]     Content-Type: application/json
	I0805 16:21:20.521930    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.521935    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.530726    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.530739    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.530744    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.530748    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:20.530751    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.530754    4640 round_trippers.go:580]     Audit-Id: ba15a3b2-b69b-473e-a331-81e01385ad47
	I0805 16:21:20.530756    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.530758    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.530761    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.530773    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"383","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.588534    4640 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0805 16:21:20.588563    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.588570    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.588737    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.588752    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.588765    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.588764    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.588772    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.588919    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.588920    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.588931    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.589012    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I0805 16:21:20.589020    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.589028    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.589034    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.597496    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.597508    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.597513    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.597518    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.597521    4640 round_trippers.go:580]     Content-Length: 1273
	I0805 16:21:20.597523    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.597525    4640 round_trippers.go:580]     Audit-Id: d7394cfc-1eb3-4623-8a7f-a5088a0398c8
	I0805 16:21:20.597527    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.597530    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.597844    4640 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"391"},"items":[{"metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0805 16:21:20.598117    4640 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0805 16:21:20.598145    4640 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0805 16:21:20.598150    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.598157    4640 round_trippers.go:473]     Content-Type: application/json
	I0805 16:21:20.598166    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.598171    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.619819    4640 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0805 16:21:20.619836    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.619842    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.619846    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.619849    4640 round_trippers.go:580]     Content-Length: 1220
	I0805 16:21:20.619852    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.619855    4640 round_trippers.go:580]     Audit-Id: 299d4cc8-0cb5-4dd5-80b3-5d54592ecd90
	I0805 16:21:20.619859    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.619861    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.619898    4640 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0805 16:21:20.619983    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.619992    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.620141    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.620153    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.620166    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.750372    4640 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0805 16:21:20.753871    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0805 16:21:20.759257    4640 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0805 16:21:20.767575    4640 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0805 16:21:20.774745    4640 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0805 16:21:20.786454    4640 command_runner.go:130] > pod/storage-provisioner created
	I0805 16:21:20.787838    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.787851    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.788087    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.788087    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.788098    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.788109    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.788117    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.788261    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.788280    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.788280    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.811467    4640 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0805 16:21:20.871433    4640 addons.go:510] duration metric: took 917.995637ms for enable addons: enabled=[default-storageclass storage-provisioner]
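
The storageclass.yaml applied above (271 bytes, scp'd at 16:21:19.988) can be recovered from the kubectl.kubernetes.io/last-applied-configuration annotation echoed in the API response at 16:21:20.597; modulo whitespace it is equivalent to:

	apiVersion: storage.k8s.io/v1
	kind: StorageClass
	metadata:
	  name: standard
	  annotations:
	    storageclass.kubernetes.io/is-default-class: "true"
	  labels:
	    addonmanager.kubernetes.io/mode: EnsureExists
	provisioner: k8s.io/minikube-hostpath
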
	I0805 16:21:21.014507    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:21.014532    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.014545    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.014553    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.014605    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:21.014619    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.014631    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.014638    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.017465    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.017464    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.017480    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.017492    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.017492    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.017496    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:21.017502    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.017504    4640 round_trippers.go:580]     Audit-Id: fb264fed-80ee-469b-a34e-7b1e8460f94b
	I0805 16:21:21.017506    4640 round_trippers.go:580]     Audit-Id: c9362211-8dfc-4385-87db-76c6486df53e
	I0805 16:21:21.017512    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.017513    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.017518    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.017519    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.017522    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.017524    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.017529    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.017545    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.017616    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"395","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:21.017684    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:21.017735    4640 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-985000" context rescaled to 1 replicas
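
The GET/PUT pair on .../deployments/coredns/scale above is the standard Scale-subresource round trip that drops CoreDNS from 2 replicas to 1. A rough client-go equivalent (an illustrative sketch only, not minikube's exact code; the kubeconfig path is taken from the log):

	// rescale.go: read the coredns Deployment's Scale, then write it back
	// with spec.replicas=1, matching the two requests in the log above.
	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		deps := cs.AppsV1().Deployments("kube-system")
		scale, err := deps.GetScale(ctx, "coredns", metav1.GetOptions{}) // the GET above
		if err != nil {
			panic(err)
		}
		scale.Spec.Replicas = 1
		if _, err := deps.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil { // the PUT above
			panic(err)
		}
	}
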
	I0805 16:21:21.514170    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:21.514200    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.514219    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.514226    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.516804    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.516819    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.516826    4640 round_trippers.go:580]     Audit-Id: 9396255c-231d-48cb-a53f-22663307b969
	I0805 16:21:21.516830    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.516834    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.516839    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.516849    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.516854    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.516951    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.013275    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:22.013299    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:22.013311    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:22.013319    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:22.016138    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:22.016155    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:22.016163    4640 round_trippers.go:580]     Audit-Id: cc869aef-9ab4-4a7f-8835-cce2afa76dd9
	I0805 16:21:22.016168    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:22.016175    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:22.016182    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:22.016187    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:22.016193    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:22 GMT
	I0805 16:21:22.016497    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.512546    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:22.512561    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:22.512567    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:22.512572    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:22.515381    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:22.515393    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:22.515401    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:22.515407    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:22.515412    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:22.515416    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:22 GMT
	I0805 16:21:22.515420    4640 round_trippers.go:580]     Audit-Id: e7d470a0-7df5-4d85-9bb5-cbf15cfa989f
	I0805 16:21:22.515423    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:22.515634    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.515838    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
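
node_ready.go's "Ready":"False" check above boils down to reading the NodeReady condition out of each GET /api/v1/nodes/multinode-985000 response. A condensed client-go sketch of that wait (illustrative only; the node name, kubeconfig path, and ~500ms cadence are taken from the log, error backoff and the 6m timeout omitted):

	// nodeready.go: poll the node object until its NodeReady condition is True.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19373-1122/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		for {
			node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-985000", metav1.GetOptions{})
			if err == nil && nodeReady(node) {
				fmt.Println("node Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
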
	I0805 16:21:23.012594    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:23.012606    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:23.012612    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:23.012616    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:23.014085    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:23.014095    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:23.014101    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:23.014104    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:23.014107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:23.014109    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:23.014113    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:23 GMT
	I0805 16:21:23.014116    4640 round_trippers.go:580]     Audit-Id: e12d5034-3bd9-498b-844e-12133805ded9
	I0805 16:21:23.014306    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:23.513150    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:23.513163    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:23.513168    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:23.513172    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:23.514595    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:23.514604    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:23.514610    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:23.514614    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:23.514617    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:23.514619    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:23.514622    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:23 GMT
	I0805 16:21:23.514635    4640 round_trippers.go:580]     Audit-Id: 2bc52e3b-1575-453f-87fa-51f4301a9426
	I0805 16:21:23.514871    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:24.012814    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:24.012826    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:24.012832    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:24.012835    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:24.014366    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:24.014379    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:24.014384    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:24.014388    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:24.014406    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:24.014411    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:24.014414    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:24 GMT
	I0805 16:21:24.014417    4640 round_trippers.go:580]     Audit-Id: f14d8611-e5e1-45fe-92f3-95559148c71b
	I0805 16:21:24.014572    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:24.513607    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:24.513620    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:24.513626    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:24.513629    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:24.515210    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:24.515220    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:24.515242    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:24.515253    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:24.515260    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:24.515264    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:24.515268    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:24 GMT
	I0805 16:21:24.515271    4640 round_trippers.go:580]     Audit-Id: 0a897d84-d437-4212-b36d-e414fedf55d4
	I0805 16:21:24.515427    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:25.013253    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:25.013272    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:25.013283    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:25.013321    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:25.015275    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:25.015308    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:25.015317    4640 round_trippers.go:580]     Audit-Id: ced7b45c-a072-4322-89ab-d0cc21ddfb1d
	I0805 16:21:25.015322    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:25.015325    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:25.015328    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:25.015332    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:25.015336    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:25 GMT
	I0805 16:21:25.015627    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:25.015849    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:25.512881    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:25.512902    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:25.512914    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:25.512920    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:25.515502    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:25.515517    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:25.515524    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:25.515529    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:25.515534    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:25.515538    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:25.515542    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:25 GMT
	I0805 16:21:25.515545    4640 round_trippers.go:580]     Audit-Id: dd6b59c1-dde3-4d67-b446-8823ad717d4f
	I0805 16:21:25.515665    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:26.013787    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:26.013811    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:26.013824    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:26.013830    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:26.016420    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:26.016440    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:26.016463    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:26 GMT
	I0805 16:21:26.016470    4640 round_trippers.go:580]     Audit-Id: 19939705-2879-44e6-830c-0c86394087ed
	I0805 16:21:26.016473    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:26.016485    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:26.016490    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:26.016494    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:26.016965    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:26.512523    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:26.512536    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:26.512541    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:26.512544    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:26.514158    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:26.514167    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:26.514172    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:26.514176    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:26.514179    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:26.514182    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:26 GMT
	I0805 16:21:26.514184    4640 round_trippers.go:580]     Audit-Id: f2346665-2701-41e1-94b0-41a70aa2f170
	I0805 16:21:26.514187    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:26.514489    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:27.013107    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:27.013136    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:27.013148    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:27.013155    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:27.015615    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:27.015632    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:27.015639    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:27 GMT
	I0805 16:21:27.015655    4640 round_trippers.go:580]     Audit-Id: 6abee22d-c1db-48e9-99db-e07791ed571f
	I0805 16:21:27.015661    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:27.015664    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:27.015667    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:27.015672    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:27.015747    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:27.015996    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:27.513549    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:27.513570    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:27.513582    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:27.513589    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:27.516173    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:27.516189    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:27.516197    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:27.516200    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:27.516204    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:27.516209    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:27 GMT
	I0805 16:21:27.516212    4640 round_trippers.go:580]     Audit-Id: a227585b-ae23-4bd1-b1dc-643eadd970cc
	I0805 16:21:27.516215    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:27.516416    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:28.014104    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:28.014132    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:28.014143    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:28.014159    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:28.016690    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:28.016705    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:28.016713    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:28.016717    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:28.016721    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:28.016725    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:28.016728    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:28 GMT
	I0805 16:21:28.016731    4640 round_trippers.go:580]     Audit-Id: 0d14831c-cc1f-41a9-a252-85e191b9594d
	I0805 16:21:28.016834    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:28.512703    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:28.512726    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:28.512739    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:28.512747    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:28.515176    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:28.515190    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:28.515197    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:28 GMT
	I0805 16:21:28.515201    4640 round_trippers.go:580]     Audit-Id: 6af459f8-bb08-43bf-ac7f-51ccacd5d664
	I0805 16:21:28.515206    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:28.515211    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:28.515215    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:28.515219    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:28.515378    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.013324    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:29.013354    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:29.013360    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:29.013364    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:29.014793    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:29.014804    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:29.014809    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:29.014813    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:29 GMT
	I0805 16:21:29.014817    4640 round_trippers.go:580]     Audit-Id: 2e50ff34-0c55-4136-b537-eee73f73706d
	I0805 16:21:29.014819    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:29.014822    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:29.014826    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:29.015098    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.513802    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:29.513832    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:29.513844    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:29.513852    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:29.516479    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:29.516496    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:29.516504    4640 round_trippers.go:580]     Audit-Id: bcbc3920-26b4-45f4-b91a-ce0e3dc11770
	I0805 16:21:29.516529    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:29.516538    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:29.516544    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:29.516549    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:29.516554    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:29 GMT
	I0805 16:21:29.516682    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.516938    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:30.013325    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:30.013349    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:30.013436    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:30.013448    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:30.016209    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:30.016222    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:30.016228    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:30.016233    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:30 GMT
	I0805 16:21:30.016238    4640 round_trippers.go:580]     Audit-Id: fb0bd3e0-89c3-4c77-a27d-be315cab22b7
	I0805 16:21:30.016242    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:30.016277    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:30.016283    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:30.016477    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:30.514344    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:30.514386    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:30.514482    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:30.514494    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:30.518828    4640 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 16:21:30.518860    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:30.518870    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:30.518876    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:30.518882    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:30 GMT
	I0805 16:21:30.518888    4640 round_trippers.go:580]     Audit-Id: c1b08932-ee78-4dcb-a190-3a8b24421284
	I0805 16:21:30.518894    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:30.518899    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:30.519002    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:31.012673    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:31.012701    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:31.012712    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:31.012718    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:31.015543    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:31.015560    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:31.015568    4640 round_trippers.go:580]     Audit-Id: b6586a64-ec07-44ee-8a00-1f3b8a00e0bd
	I0805 16:21:31.015572    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:31.015576    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:31.015580    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:31.015583    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:31.015589    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:31 GMT
	I0805 16:21:31.015682    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:31.512531    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:31.512543    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:31.512550    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:31.512554    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:31.514066    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:31.514076    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:31.514081    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:31 GMT
	I0805 16:21:31.514085    4640 round_trippers.go:580]     Audit-Id: 7d410de7-b0d5-4d4e-8455-d31b0df7d302
	I0805 16:21:31.514089    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:31.514093    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:31.514096    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:31.514107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:31.514758    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:32.014110    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:32.014136    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:32.014147    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:32.014157    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:32.016553    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:32.016570    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:32.016580    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:32.016586    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:32.016592    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:32.016598    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:32.016602    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:32 GMT
	I0805 16:21:32.016605    4640 round_trippers.go:580]     Audit-Id: 67fdb64b-273a-46c2-aac5-c3b115422aa4
	I0805 16:21:32.016861    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:32.017132    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:32.513171    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:32.513188    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:32.513195    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:32.513198    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:32.514908    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:32.514920    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:32.514925    4640 round_trippers.go:580]     Audit-Id: 0f5a2e98-6be6-4963-8897-91c70642048c
	I0805 16:21:32.514928    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:32.514931    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:32.514933    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:32.514936    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:32.514939    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:32 GMT
	I0805 16:21:32.515082    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:33.013769    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:33.013803    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:33.013814    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:33.013822    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:33.016491    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:33.016509    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:33.016519    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:33 GMT
	I0805 16:21:33.016526    4640 round_trippers.go:580]     Audit-Id: 96b5f269-7be9-42a9-9687-cba57d05f76e
	I0805 16:21:33.016532    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:33.016538    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:33.016543    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:33.016548    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:33.016715    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:33.512751    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:33.512772    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:33.512783    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:33.512789    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:33.515431    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:33.515480    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:33.515498    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:33.515506    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:33.515510    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:33 GMT
	I0805 16:21:33.515513    4640 round_trippers.go:580]     Audit-Id: 6cd252a3-d07d-441e-bcf4-bc3bd00c2488
	I0805 16:21:33.515517    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:33.515520    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:33.515747    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.013003    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:34.013032    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:34.013043    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:34.013052    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:34.015447    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:34.015465    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:34.015472    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:34.015476    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:34.015479    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:34.015484    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:34.015487    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:34 GMT
	I0805 16:21:34.015492    4640 round_trippers.go:580]     Audit-Id: efcfb0d1-8345-4db5-bce9-e31085842da3
	I0805 16:21:34.015599    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.513298    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:34.513317    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:34.513376    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:34.513383    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:34.515051    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:34.515065    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:34.515072    4640 round_trippers.go:580]     Audit-Id: 2a42cb6a-0051-47bd-85f4-9f8ca80afa70
	I0805 16:21:34.515078    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:34.515081    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:34.515087    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:34.515099    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:34.515103    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:34 GMT
	I0805 16:21:34.515359    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.515540    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:35.013932    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:35.013957    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:35.013968    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:35.013976    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:35.016505    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:35.016524    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:35.016530    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:35.016537    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:35.016541    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:35.016544    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:35.016555    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:35 GMT
	I0805 16:21:35.016559    4640 round_trippers.go:580]     Audit-Id: 09fa0e04-c026-439e-9cd7-392fd82b16fe
	I0805 16:21:35.016913    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:35.513491    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:35.513514    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:35.513526    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:35.513532    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:35.515995    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:35.516012    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:35.516020    4640 round_trippers.go:580]     Audit-Id: a2b05a8a-9a91-4d20-93d0-b8701ac59b95
	I0805 16:21:35.516024    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:35.516036    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:35.516041    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:35.516055    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:35.516060    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:35 GMT
	I0805 16:21:35.516151    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:36.013521    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.013549    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.013561    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.013566    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.016095    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:36.016112    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.016119    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.016131    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.016136    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.016140    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.016144    4640 round_trippers.go:580]     Audit-Id: 77e04f39-a037-4ea2-9716-ad04139089d1
	I0805 16:21:36.016147    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.016230    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:36.016465    4640 node_ready.go:49] node "multinode-985000" has status "Ready":"True"
	I0805 16:21:36.016481    4640 node_ready.go:38] duration metric: took 15.504115701s for node "multinode-985000" to be "Ready" ...
	I0805 16:21:36.016489    4640 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:21:36.016543    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:36.016551    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.016559    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.016563    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.019046    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:36.019057    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.019065    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.019069    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.019078    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.019081    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.019084    4640 round_trippers.go:580]     Audit-Id: 96048303-6e62-4ba8-a291-bc1ad976756e
	I0805 16:21:36.019091    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.019721    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
	I0805 16:21:36.021921    4640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:36.021960    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:36.021964    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.021970    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.021974    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.023179    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.023187    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.023192    4640 round_trippers.go:580]     Audit-Id: ba42f387-f106-4773-86de-3a22085fd86a
	I0805 16:21:36.023195    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.023198    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.023200    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.023204    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.023208    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.023410    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:36.023652    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.023659    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.023665    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.023671    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.024732    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.024744    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.024752    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.024758    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.024765    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.024768    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.024771    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.024775    4640 round_trippers.go:580]     Audit-Id: 2008721c-b230-4e73-b037-d3a843d7c7c8
	I0805 16:21:36.024909    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:36.523495    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:36.523508    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.523514    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.523519    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.525003    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.525014    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.525020    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.525042    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.525049    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.525053    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.525060    4640 round_trippers.go:580]     Audit-Id: 1ad5a8dd-64b3-4881-9a8e-e5eaab368c53
	I0805 16:21:36.525066    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.525202    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:36.525483    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.525490    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.525498    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.525502    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.526801    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.526810    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.526814    4640 round_trippers.go:580]     Audit-Id: 71c4017f-a267-489e-86ed-59098eae3b88
	I0805 16:21:36.526817    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.526834    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.526840    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.526846    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.526850    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.527025    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:37.022759    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:37.022781    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.022791    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.022799    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.025487    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.025503    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.025510    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.025515    4640 round_trippers.go:580]     Audit-Id: 7446d9fd-22ed-4d20-b0f2-e8c4a88b04f4
	I0805 16:21:37.025536    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.025543    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.025547    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.025556    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.025649    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:37.026010    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.026020    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.026028    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.026033    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.027337    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.027346    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.027354    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.027359    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.027363    4640 round_trippers.go:580]     Audit-Id: a309eed4-f088-47f7-8b84-4761b59dbb8c
	I0805 16:21:37.027366    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.027368    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.027371    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.027425    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.522283    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:37.522304    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.522315    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.522322    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.524762    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.524776    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.524782    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.524788    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.524792    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.524795    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.524799    4640 round_trippers.go:580]     Audit-Id: eaef42a8-7b43-4091-9b70-8d31adc979e5
	I0805 16:21:37.524803    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.525073    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0805 16:21:37.525438    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.525480    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.525488    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.525492    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.526890    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.526903    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.526912    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.526918    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.526927    4640 round_trippers.go:580]     Audit-Id: a3a0e71a-c982-4504-9fae-e76101688c05
	I0805 16:21:37.526931    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.526935    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.526937    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.527034    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.527211    4640 pod_ready.go:92] pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.527220    4640 pod_ready.go:81] duration metric: took 1.505289062s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
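
The block above is minikube's pod_ready poll: a GET on the pod (plus its node) roughly every 500ms until the PodReady condition reports True. A minimal client-go sketch of the same pattern, assuming a kubeconfig path and reusing the pod/namespace names from the log; this is a sketch, not minikube's actual implementation:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path; substitute your own.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // The log waits "up to 6m0s"; the context enforces that bound here.
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()

        for {
            pod, err := clientset.CoreV1().Pods("kube-system").
                Get(ctx, "coredns-7db6d8ff4d-fqtll", metav1.GetOptions{})
            if err != nil {
                panic(err) // includes context deadline exceeded after 6m
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    fmt.Println("pod is Ready")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
        }
    }
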
	I0805 16:21:37.527230    4640 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.527259    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-985000
	I0805 16:21:37.527264    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.527269    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.527277    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.528379    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.528389    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.528394    4640 round_trippers.go:580]     Audit-Id: 3cf4f372-47fb-4b72-9b30-185d93d01537
	I0805 16:21:37.528401    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.528405    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.528408    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.528411    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.528414    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.528618    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-985000","namespace":"kube-system","uid":"8d7ca2d9-8c7b-41b9-a199-de6449107471","resourceVersion":"379","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.mirror":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030299Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0805 16:21:37.528833    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.528840    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.528845    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.528850    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.529802    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.529808    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.529813    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.529816    4640 round_trippers.go:580]     Audit-Id: 314df0bd-894e-4607-bad0-3348c18fe807
	I0805 16:21:37.529820    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.529823    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.529826    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.529833    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.530046    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.530203    4640 pod_ready.go:92] pod "etcd-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.530210    4640 pod_ready.go:81] duration metric: took 2.974841ms for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.530218    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.530249    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-985000
	I0805 16:21:37.530253    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.530259    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.530262    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.531449    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.531456    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.531461    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.531463    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.531467    4640 round_trippers.go:580]     Audit-Id: 1801a8f0-22d5-44e8-942c-ea521b1ffa66
	I0805 16:21:37.531469    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.531475    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.531477    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.531592    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-985000","namespace":"kube-system","uid":"9be3378a-5fab-4907-baad-507918e714e4","resourceVersion":"369","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.mirror":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0805 16:21:37.531810    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.531820    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.531825    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.531830    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.532663    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.532668    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.532672    4640 round_trippers.go:580]     Audit-Id: 6d0fc4ed-c609-4ee7-a57f-b61eed1bc442
	I0805 16:21:37.532675    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.532679    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.532682    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.532684    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.532688    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.532807    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.532958    4640 pod_ready.go:92] pod "kube-apiserver-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.532967    4640 pod_ready.go:81] duration metric: took 2.743443ms for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.532973    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.533000    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-985000
	I0805 16:21:37.533004    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.533009    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.533012    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.533987    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.533995    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.534000    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.534004    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.534020    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.534027    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.534031    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.534034    4640 round_trippers.go:580]     Audit-Id: 97e4dc5c-f4bf-419e-8b15-be800418054c
	I0805 16:21:37.534147    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-985000","namespace":"kube-system","uid":"4ad64361-65de-4b0b-b2a3-07df18c2e603","resourceVersion":"342","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.mirror":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.seen":"2024-08-05T23:21:06.366027130Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0805 16:21:37.534370    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.534377    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.534383    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.534386    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.535293    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.535301    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.535305    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.535308    4640 round_trippers.go:580]     Audit-Id: a4c04a0a-9401-41d1-a0fc-f2a2187abde4
	I0805 16:21:37.535310    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.535313    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.535320    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.535323    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.535432    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.535591    4640 pod_ready.go:92] pod "kube-controller-manager-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.535599    4640 pod_ready.go:81] duration metric: took 2.621545ms for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.535606    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.535629    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwgw7
	I0805 16:21:37.535634    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.535639    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.535643    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.536550    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.536557    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.536565    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.536570    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.536575    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.536578    4640 round_trippers.go:580]     Audit-Id: 5a688e80-7db3-4070-a1a8-c3419ddb4d44
	I0805 16:21:37.536580    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.536582    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.536704    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fwgw7","generateName":"kube-proxy-","namespace":"kube-system","uid":"3fb72e39-699d-4123-ae5e-e314a191d904","resourceVersion":"409","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b6258e6-7b31-4600-b32b-4a269867c123","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b6258e6-7b31-4600-b32b-4a269867c123\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0805 16:21:37.614745    4640 request.go:629] Waited for 77.807971ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.614815    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.614822    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.614839    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.614845    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.616956    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.616984    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.616989    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.616993    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.616996    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.616999    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.617002    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.617005    4640 round_trippers.go:580]     Audit-Id: e297627c-4c52-417b-935c-d406bf086c16
	I0805 16:21:37.617232    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.617428    4640 pod_ready.go:92] pod "kube-proxy-fwgw7" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.617437    4640 pod_ready.go:81] duration metric: took 81.82693ms for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.617444    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.815296    4640 request.go:629] Waited for 197.761592ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:21:37.815347    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:21:37.815355    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.815366    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.815376    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.817961    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.817976    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.818001    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.818008    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:37.818049    4640 round_trippers.go:580]     Audit-Id: cc44c4e8-8012-4718-aa24-c05fec399a2e
	I0805 16:21:37.818064    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.818078    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.818082    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.818186    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-985000","namespace":"kube-system","uid":"5e23b1b7-e45d-4b43-831c-aa835c5e536d","resourceVersion":"396","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.mirror":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.seen":"2024-08-05T23:21:06.366029633Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0805 16:21:38.014472    4640 request.go:629] Waited for 195.947535ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:38.014569    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:38.014578    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.014589    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.014597    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.017395    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.017406    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.017413    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.017418    4640 round_trippers.go:580]     Audit-Id: 925efcbc-f43b-4431-905e-26927bb76a48
	I0805 16:21:38.017422    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.017428    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.017434    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.017441    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.017905    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:38.018153    4640 pod_ready.go:92] pod "kube-scheduler-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:38.018164    4640 pod_ready.go:81] duration metric: took 400.713995ms for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:38.018173    4640 pod_ready.go:38] duration metric: took 2.001673669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
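
The "Waited for ... due to client-side throttling, not priority and fairness" lines scattered through this phase come from client-go's client-side rate limiter (historically QPS 5 / burst 10 by default), not from the server's API Priority and Fairness. If those delays matter, the usual knob is rest.Config; a sketch with illustrative values:

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient builds a clientset with a looser client-side rate limit.
    // The QPS/Burst numbers below are illustrative, not minikube's settings.
    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // allow ~50 requests/second on average
        cfg.Burst = 100 // and short bursts of up to 100
        return kubernetes.NewForConfig(cfg)
    }
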
	I0805 16:21:38.018198    4640 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:21:38.018268    4640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:21:38.030133    4640 command_runner.go:130] > 1977
	I0805 16:21:38.030360    4640 api_server.go:72] duration metric: took 18.07694495s to wait for apiserver process to appear ...
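
The process check above shells out to pgrep (over SSH, with sudo, inside the VM). A local sketch of the same check, with the flag meanings noted inline:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // apiserverPID mirrors the "pgrep -xnf kube-apiserver.*minikube.*" step:
    // -f matches against the full argument list, -x requires an exact match
    // of that pattern, -n returns only the newest matching process.
    func apiserverPID() (string, error) {
        out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return "", fmt.Errorf("apiserver process not found: %w", err)
        }
        return strings.TrimSpace(string(out)), nil // e.g. "1977" in the log above
    }
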
	I0805 16:21:38.030369    4640 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:21:38.030384    4640 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:21:38.034009    4640 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
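
The healthz probe can be reproduced with the authenticated discovery client rather than a raw HTTP client, which sidesteps certificate handling entirely. A sketch, assuming a clientset built as in the earlier example:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // checkHealthz hits /healthz on the apiserver, analogous to the check
    // above that returned "200: ok".
    func checkHealthz(ctx context.Context, clientset *kubernetes.Clientset) error {
        body, err := clientset.Discovery().RESTClient().
            Get().AbsPath("/healthz").DoRaw(ctx)
        if err != nil {
            return err
        }
        fmt.Printf("healthz: %s\n", body) // expect "ok"
        return nil
    }
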
	I0805 16:21:38.034048    4640 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0805 16:21:38.034052    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.034058    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.034063    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.034646    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:38.034653    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.034658    4640 round_trippers.go:580]     Audit-Id: 9f5c9766-330c-4bb5-a5de-4c3a0fdbe474
	I0805 16:21:38.034662    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.034665    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.034668    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.034670    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.034673    4640 round_trippers.go:580]     Content-Length: 263
	I0805 16:21:38.034676    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.034687    4640 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0805 16:21:38.034733    4640 api_server.go:141] control plane version: v1.30.3
	I0805 16:21:38.034742    4640 api_server.go:131] duration metric: took 4.369143ms to wait for apiserver health ...
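
The /version body shown above deserializes into client-go's version.Info, and the discovery client wraps the whole call. A sketch returning the gitVersion field the log reports as the control plane version:

    package main

    import "k8s.io/client-go/kubernetes"

    // controlPlaneVersion fetches /version via the discovery client.
    func controlPlaneVersion(clientset *kubernetes.Clientset) (string, error) {
        info, err := clientset.Discovery().ServerVersion()
        if err != nil {
            return "", err
        }
        return info.GitVersion, nil // "v1.30.3" for the cluster above
    }
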
	I0805 16:21:38.034747    4640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 16:21:38.213812    4640 request.go:629] Waited for 178.999213ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.213950    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.213960    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.213970    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.213980    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.217309    4640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:21:38.217324    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.217331    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.217336    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.217363    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.217372    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.217377    4640 round_trippers.go:580]     Audit-Id: 0f21513f-44e7-4d2f-bacd-2a12fceef757
	I0805 16:21:38.217381    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.217979    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0805 16:21:38.219249    4640 system_pods.go:59] 8 kube-system pods found
	I0805 16:21:38.219261    4640 system_pods.go:61] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:21:38.219265    4640 system_pods.go:61] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:21:38.219268    4640 system_pods.go:61] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:21:38.219271    4640 system_pods.go:61] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:21:38.219276    4640 system_pods.go:61] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:21:38.219278    4640 system_pods.go:61] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:21:38.219280    4640 system_pods.go:61] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:21:38.219283    4640 system_pods.go:61] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:21:38.219286    4640 system_pods.go:74] duration metric: took 184.535842ms to wait for pod list to return data ...
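
The system_pods check is a single List on kube-system followed by a per-pod phase check, which is what produces the "8 kube-system pods found" / "Running" lines above. A sketch mirroring that, assuming a clientset as before:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // systemPodsRunning lists kube-system once and reports each pod's phase,
    // returning true only if every pod is Running.
    func systemPodsRunning(ctx context.Context, clientset *kubernetes.Clientset) (bool, error) {
        pods, err := clientset.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return false, err
        }
        allRunning := true
        for _, p := range pods.Items {
            fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
            if p.Status.Phase != corev1.PodRunning {
                allRunning = false
            }
        }
        return allRunning, nil
    }
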
	I0805 16:21:38.219291    4640 default_sa.go:34] waiting for default service account to be created ...
	I0805 16:21:38.413643    4640 request.go:629] Waited for 194.308242ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:21:38.413680    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:21:38.413687    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.413695    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.413699    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.415522    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:38.415531    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.415536    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.415539    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.415543    4640 round_trippers.go:580]     Content-Length: 261
	I0805 16:21:38.415546    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.415548    4640 round_trippers.go:580]     Audit-Id: efc85c0c-9bbc-4cb7-8c14-19ba2f873800
	I0805 16:21:38.415551    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.415553    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.415563    4640 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b0626468-f73b-4e9b-8270-658495d43f4a","resourceVersion":"337","creationTimestamp":"2024-08-05T23:21:19Z"}}]}
	I0805 16:21:38.415681    4640 default_sa.go:45] found service account: "default"
	I0805 16:21:38.415690    4640 default_sa.go:55] duration metric: took 196.394719ms for default service account to be created ...
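
The step above lists service accounts in the default namespace until "default" shows up (kubeadm creates it shortly after the control plane starts). A sketch using a direct Get instead of a List, which is equivalent for this purpose:

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // defaultSAExists checks for the "default" ServiceAccount; a NotFound
    // error simply means it has not been created yet.
    func defaultSAExists(ctx context.Context, clientset *kubernetes.Clientset) (bool, error) {
        _, err := clientset.CoreV1().ServiceAccounts("default").
            Get(ctx, "default", metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        return true, nil
    }
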
	I0805 16:21:38.415697    4640 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 16:21:38.613742    4640 request.go:629] Waited for 198.012461ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.613858    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.613864    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.613870    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.613874    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.616077    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.616090    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.616099    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.616106    4640 round_trippers.go:580]     Audit-Id: 3f8a6f23-788b-41c4-8dee-6ff59c02c21d
	I0805 16:21:38.616112    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.616116    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.616126    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.616143    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.616489    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0805 16:21:38.617747    4640 system_pods.go:86] 8 kube-system pods found
	I0805 16:21:38.617761    4640 system_pods.go:89] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:21:38.617766    4640 system_pods.go:89] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:21:38.617770    4640 system_pods.go:89] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:21:38.617773    4640 system_pods.go:89] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:21:38.617776    4640 system_pods.go:89] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:21:38.617780    4640 system_pods.go:89] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:21:38.617784    4640 system_pods.go:89] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:21:38.617787    4640 system_pods.go:89] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:21:38.617792    4640 system_pods.go:126] duration metric: took 202.090644ms to wait for k8s-apps to be running ...
	I0805 16:21:38.617801    4640 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 16:21:38.617848    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:21:38.629448    4640 system_svc.go:56] duration metric: took 11.643357ms WaitForService to wait for kubelet
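
The kubelet check relies on systemctl's exit code: "is-active --quiet" prints nothing and exits 0 only when the unit is active. A local sketch; as the Run line above shows, minikube executes its variant over SSH with sudo inside the VM:

    package main

    import "os/exec"

    // kubeletRunning returns true when the kubelet unit is active on this
    // host (exit code 0 from "systemctl is-active --quiet").
    func kubeletRunning() bool {
        return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }
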
	I0805 16:21:38.629463    4640 kubeadm.go:582] duration metric: took 18.676048708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:21:38.629475    4640 node_conditions.go:102] verifying NodePressure condition ...
	I0805 16:21:38.814057    4640 request.go:629] Waited for 184.539621ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0805 16:21:38.814182    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0805 16:21:38.814193    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.814205    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.814213    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.817076    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.817092    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.817099    4640 round_trippers.go:580]     Audit-Id: 83bb2c88-8ae3-45b7-a0f6-9d3f9fead5f2
	I0805 16:21:38.817103    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.817112    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.817116    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.817123    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.817128    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:39 GMT
	I0805 16:21:38.817200    4640 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I0805 16:21:38.817474    4640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:21:38.817490    4640 node_conditions.go:123] node cpu capacity is 2
	I0805 16:21:38.817502    4640 node_conditions.go:105] duration metric: took 188.023135ms to run NodePressure ...
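The NodePressure step pulls Status.Capacity off each node in the NodeList, which is where the 17734596Ki ephemeral-storage and 2-CPU figures above come from. A drop-in helper for the client-go scaffold from the pod sketch earlier (same imports):

    // nodeCapacity prints each node's ephemeral-storage and CPU capacity, the
    // two values the NodePressure check reads out of the NodeList above.
    func nodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }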
	I0805 16:21:38.817512    4640 start.go:241] waiting for startup goroutines ...
	I0805 16:21:38.817520    4640 start.go:246] waiting for cluster config update ...
	I0805 16:21:38.817530    4640 start.go:255] writing updated cluster config ...
	I0805 16:21:38.838343    4640 out.go:177] 
	I0805 16:21:38.859405    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:38.859465    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:38.881260    4640 out.go:177] * Starting "multinode-985000-m02" worker node in "multinode-985000" cluster
	I0805 16:21:38.923226    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:21:38.923254    4640 cache.go:56] Caching tarball of preloaded images
	I0805 16:21:38.923425    4640 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:21:38.923439    4640 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:21:38.923503    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:38.924257    4640 start.go:360] acquireMachinesLock for multinode-985000-m02: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:21:38.924355    4640 start.go:364] duration metric: took 78.775µs to acquireMachinesLock for "multinode-985000-m02"
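acquireMachinesLock serializes machine creation per profile so two nodes never race on the same state directory; the 78.775µs acquisition means nothing else held the lock, and the Delay:500ms/Timeout:13m settings govern the retry loop when something does. A rough file-based sketch of that contract; the O_EXCL lockfile protocol here is an illustration only, minikube's real lock is a named mutex:

    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // acquire takes an exclusive advisory lock by creating a lockfile with O_EXCL,
    // retrying every delay until timeout, echoing the Delay:500ms/Timeout:13m in
    // the log. Sketch only: minikube's actual lock is a named mutex, not a file.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if !errors.Is(err, os.ErrExist) {
                return nil, err // real I/O failure, not contention
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
        if err == nil {
            defer release()
        }
        fmt.Println("acquired:", err == nil)
    }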
	I0805 16:21:38.924379    4640 start.go:93] Provisioning new machine with config: &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0805 16:21:38.924443    4640 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0805 16:21:38.946258    4640 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:21:38.946431    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:38.946482    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:38.956315    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52515
	I0805 16:21:38.956651    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:38.957008    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:38.957028    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:38.957245    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:38.957408    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:38.957527    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:38.957642    4640 start.go:159] libmachine.API.Create for "multinode-985000" (driver="hyperkit")
	I0805 16:21:38.957663    4640 client.go:168] LocalClient.Create starting
	I0805 16:21:38.957697    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:21:38.957735    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:21:38.957747    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:21:38.957790    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:21:38.957819    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:21:38.957833    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:21:38.957849    4640 main.go:141] libmachine: Running pre-create checks...
	I0805 16:21:38.957855    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .PreCreateCheck
	I0805 16:21:38.957933    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:38.957959    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:38.967700    4640 main.go:141] libmachine: Creating machine...
	I0805 16:21:38.967725    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .Create
	I0805 16:21:38.967957    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:38.968233    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:38.967940    4677 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:21:38.968338    4640 main.go:141] libmachine: (multinode-985000-m02) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:21:39.171726    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.171650    4677 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa...
	I0805 16:21:39.251408    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.251327    4677 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk...
	I0805 16:21:39.251421    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Writing magic tar header
	I0805 16:21:39.251439    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Writing SSH key tar header
	I0805 16:21:39.252021    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.251983    4677 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02 ...
	I0805 16:21:39.622286    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:39.622309    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid
	I0805 16:21:39.622382    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Using UUID ab5b9c9f-9e28-4bc2-8fcd-b98fce011173
	I0805 16:21:39.647304    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Generated MAC a6:1c:88:9c:44:3
	I0805 16:21:39.647324    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:21:39.647363    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0805 16:21:39.647396    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0805 16:21:39.647440    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/j
enkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:21:39.647475    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ab5b9c9f-9e28-4bc2-8fcd-b98fce011173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/mult
inode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
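The generated command line maps the machine config onto hyperkit flags: -c/-m for CPUs and memory, -U for the UUID (from which the guest MAC is derived), -F for the pidfile, one -s slot per virtual PCI device (hostbridge, lpc, virtio-net, the raw disk as virtio-blk, the ISO as ahci-cd, virtio-rnd), -l for the com1 serial log, and -f kexec to boot the kernel and initrd directly with the appended cmdline. A condensed sketch that assembles the same shape of argv (paths shortened to placeholders; this mirrors the logged command, not the driver's exact code):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // buildArgs assembles a hyperkit argv with the same shape as the logged
    // command. stateDir and the file names are shortened placeholders.
    func buildArgs(stateDir, uuid string, cpus, memMB int) []string {
        return []string{
            "-A", "-u",
            "-F", stateDir + "/hyperkit.pid",
            "-c", strconv.Itoa(cpus),
            "-m", strconv.Itoa(memMB) + "M",
            "-s", "0:0,hostbridge", // PCI slot 0: host bridge
            "-s", "31,lpc",         // LPC bridge for the serial port
            "-s", "1:0,virtio-net", // NIC on the shared vmnet network
            "-U", uuid,             // UUID the guest MAC is derived from
            "-s", "2:0,virtio-blk," + stateDir + "/disk.rawdisk",
            "-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
            "-s", "4,virtio-rnd",
            "-l", "com1,autopty=" + stateDir + "/tty,log=" + stateDir + "/console-ring",
            "-f", "kexec," + stateDir + "/bzimage," + stateDir + "/initrd,loglevel=3 console=ttyS0",
        }
    }

    func main() {
        fmt.Println(strings.Join(buildArgs("/tmp/m02", "ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", 2, 2200), " "))
    }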
	I0805 16:21:39.647493    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:21:39.650407    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Pid is 4678
	I0805 16:21:39.650823    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 0
	I0805 16:21:39.650838    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:39.650909    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:39.651807    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:39.651870    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:39.651899    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:39.651984    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:39.652006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:39.652022    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:39.652032    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:39.652039    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:39.652046    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:39.652082    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:39.652100    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:39.652113    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:39.652123    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:39.652143    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
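The Attempt loop above polls /var/db/dhcpd_leases, the lease database macOS bootpd maintains for the shared network hyperkit attaches to, until the VM's freshly generated MAC shows up with a lease. Note the file stores the MAC without zero-padding (a6:1c:88:9c:44:3), so matching has to be against that exact form. A simplified parser, assuming the brace-delimited key=value block layout the logged entries reflect:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // ipForMAC scans dhcpd_leases-style blocks and returns the ip_address of the
    // entry whose hw_address ends with mac. It assumes ip_address precedes
    // hw_address within a block, as in the logged entries.
    func ipForMAC(path, mac string) (string, error) {
        f, err := os.Open(path)
        if err != nil {
            return "", err
        }
        defer f.Close()
        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case line == "{": // a new lease block starts
                ip = ""
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+mac):
                return ip, nil // e.g. hw_address=1,a6:1c:88:9c:44:3
            }
        }
        return "", sc.Err()
    }

    func main() {
        ip, err := ipForMAC("/var/db/dhcpd_leases", "a6:1c:88:9c:44:3")
        fmt.Println(ip, err)
    }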
	I0805 16:21:39.657903    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:21:39.666018    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:21:39.666937    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:21:39.666963    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:21:39.666975    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:21:39.666990    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:21:40.050205    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:21:40.050221    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:21:40.165006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:21:40.165028    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:21:40.165042    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:21:40.165049    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:21:40.165899    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:21:40.165911    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:21:41.653048    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 1
	I0805 16:21:41.653066    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:41.653144    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:41.653911    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:41.653968    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:41.653979    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:41.653992    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:41.653998    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:41.654006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:41.654015    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:41.654030    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:41.654045    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:41.654053    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:41.654061    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:41.654070    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:41.654078    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:41.654093    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:43.655366    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 2
	I0805 16:21:43.655382    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:43.655471    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:43.656243    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:43.656291    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:43.656301    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:43.656319    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:43.656329    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:43.656351    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:43.656362    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:43.656369    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:43.656375    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:43.656391    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:43.656406    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:43.656416    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:43.656423    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:43.656437    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:45.657345    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 3
	I0805 16:21:45.657361    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:45.657459    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:45.658214    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:45.658269    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:45.658278    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:45.658286    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:45.658295    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:45.658310    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:45.658321    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:45.658329    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:45.658337    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:45.658349    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:45.658362    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:45.658370    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:45.658378    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:45.658387    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:45.751756    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:21:45.751812    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:21:45.751830    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:21:45.774801    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:21:47.659182    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 4
	I0805 16:21:47.659208    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:47.659291    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:47.660062    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:47.660112    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:47.660128    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:47.660137    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:47.660145    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:47.660153    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:47.660162    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:47.660178    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:47.660192    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:47.660204    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:47.660218    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:47.660230    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:47.660240    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:47.660260    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:49.662115    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 5
	I0805 16:21:49.662148    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:49.662310    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:49.663748    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:49.663812    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0805 16:21:49.663831    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b00c}
	I0805 16:21:49.663846    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found match: a6:1c:88:9c:44:3
	I0805 16:21:49.663856    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | IP: 192.169.0.14
	I0805 16:21:49.663945    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:49.664855    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:49.665006    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:49.665127    4640 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 16:21:49.665139    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:21:49.665271    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:49.665344    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:49.666326    4640 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 16:21:49.666337    4640 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 16:21:49.666342    4640 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 16:21:49.666348    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.666471    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.666603    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.666743    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.666869    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.667045    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.667279    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.667287    4640 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 16:21:49.724369    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
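WaitForSSH amounts to dialing the guest and running `exit 0` until it succeeds; the nil error with empty output above is the success signal. A retry sketch with golang.org/x/crypto/ssh; InsecureIgnoreHostKey and the empty Auth slot are placeholders for illustration, real provisioning authenticates with the generated id_rsa and should verify host keys:

    package main

    import (
        "fmt"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // waitForSSH dials addr and runs `exit 0` until it succeeds or the deadline
    // passes, which is all the logged WaitForSSH loop does.
    func waitForSSH(addr string, cfg *ssh.ClientConfig, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            client, err := ssh.Dial("tcp", addr, cfg)
            if err == nil {
                sess, serr := client.NewSession()
                if serr == nil {
                    serr = sess.Run("exit 0")
                    sess.Close()
                }
                client.Close()
                if serr == nil {
                    return nil
                }
                err = serr
            }
            if time.Now().After(deadline) {
                return err
            }
            time.Sleep(time.Second)
        }
    }

    func main() {
        cfg := &ssh.ClientConfig{
            User: "docker",
            // Auth: supply ssh.PublicKeys(signer) loaded from the machine's id_rsa.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; pin keys in practice
            Timeout:         5 * time.Second,
        }
        fmt.Println(waitForSSH("192.169.0.14:22", cfg, 2*time.Minute))
    }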
	I0805 16:21:49.724382    4640 main.go:141] libmachine: Detecting the provisioner...
	I0805 16:21:49.724388    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.724522    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.724626    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.724719    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.724810    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.724938    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.725087    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.725094    4640 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 16:21:49.782403    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 16:21:49.782454    4640 main.go:141] libmachine: found compatible host: buildroot
	I0805 16:21:49.782460    4640 main.go:141] libmachine: Provisioning with buildroot...
	I0805 16:21:49.782466    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.782595    4640 buildroot.go:166] provisioning hostname "multinode-985000-m02"
	I0805 16:21:49.782606    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.782698    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.782797    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.782871    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.782964    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.783079    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.783204    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.783350    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.783359    4640 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000-m02 && echo "multinode-985000-m02" | sudo tee /etc/hostname
	I0805 16:21:49.854175    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000-m02
	
	I0805 16:21:49.854190    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.854319    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.854421    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.854492    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.854587    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.854712    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.854870    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.854882    4640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:21:49.917814    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:21:49.917830    4640 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:21:49.917840    4640 buildroot.go:174] setting up certificates
	I0805 16:21:49.917846    4640 provision.go:84] configureAuth start
	I0805 16:21:49.917856    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.917985    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:49.918095    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.918192    4640 provision.go:143] copyHostCerts
	I0805 16:21:49.918223    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:21:49.918280    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:21:49.918285    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:21:49.918411    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:21:49.918617    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:21:49.918652    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:21:49.918658    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:21:49.918733    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:21:49.918888    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:21:49.918922    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:21:49.918927    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:21:49.918994    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:21:49.919145    4640 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-985000-m02]
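The server certificate is minted from the local CA with the SAN list logged above (loopback, the new node IP 192.169.0.14, and the host names), so the Docker TLS endpoint on port 2376 validates under any of those identities. A compact crypto/x509 sketch of signing such a cert; it needs crypto/rand, crypto/rsa, crypto/x509, crypto/x509/pkix, math/big, net, and time, CA loading and PEM encoding are elided, and the serial number is a placeholder (the validity, though, mirrors the CertExpiration:26280h0m0s in the machine config):

    // newServerCert signs a server certificate for the given IP and DNS SANs
    // against the provided CA, the operation behind the provision.go line above.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dns []string) (certDER []byte, key *rsa.PrivateKey, err error) {
        key, err = rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2), // placeholder serial
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-985000-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips, // e.g. 127.0.0.1 and 192.169.0.14, per the san=[...] above
            DNSNames:     dns, // e.g. localhost, minikube, multinode-985000-m02
        }
        certDER, err = x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return certDER, key, err
    }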
	I0805 16:21:50.072896    4640 provision.go:177] copyRemoteCerts
	I0805 16:21:50.072947    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:21:50.072962    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.073107    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.073199    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.073317    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.073426    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:50.108446    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:21:50.108519    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:21:50.128617    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:21:50.128684    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0805 16:21:50.148653    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:21:50.148720    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:21:50.168682    4640 provision.go:87] duration metric: took 250.828344ms to configureAuth
	I0805 16:21:50.168695    4640 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:21:50.168835    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:50.168849    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:50.168993    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.169087    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.169175    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.169262    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.169345    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.169486    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.169621    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.169628    4640 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:21:50.228062    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:21:50.228074    4640 buildroot.go:70] root file system type: tmpfs
	I0805 16:21:50.228150    4640 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:21:50.228164    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.228293    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.228388    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.228480    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.228586    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.228755    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.228888    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.228934    4640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:21:50.296901    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:21:50.296919    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.297064    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.297158    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.297250    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.297333    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.297475    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.297611    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.297624    4640 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:21:51.873922    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
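Two things worth noting in this exchange. The `%!s(MISSING)` earlier in the printf command is Go's fmt package flagging a literal %s that reached a printf-style logger with no matching argument; it is an artifact of how the command was logged, not of what ran, as the correctly expanded unit text echoed back above confirms. And the `diff || { mv ... restart; }` idiom makes the unit install idempotent: the rendered unit is only moved into place and the service reloaded, enabled, and restarted when it actually differs from what is on disk (here the diff fails because no unit existed yet, so the file is installed and docker is enabled for the first time). The same pattern sketched in Go, with the paths from the log:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // installIfChanged mirrors the diff-or-replace idiom from the log: only when
    // the rendered unit differs from what is on disk is it written into place
    // and the service reloaded, enabled, and restarted.
    func installIfChanged(unitPath string, newBody []byte) error {
        old, _ := os.ReadFile(unitPath) // a missing unit reads as empty, like the failed diff
        if bytes.Equal(old, newBody) {
            return nil // identical: leave the running service alone
        }
        if err := os.WriteFile(unitPath, newBody, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{
            {"systemctl", "daemon-reload"},
            {"systemctl", "enable", "docker"},
            {"systemctl", "restart", "docker"},
        } {
            if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
        fmt.Println(installIfChanged("/lib/systemd/system/docker.service", unit))
    }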
	I0805 16:21:51.873940    4640 main.go:141] libmachine: Checking connection to Docker...
	I0805 16:21:51.873964    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetURL
	I0805 16:21:51.874107    4640 main.go:141] libmachine: Docker is up and running!
	I0805 16:21:51.874115    4640 main.go:141] libmachine: Reticulating splines...
	I0805 16:21:51.874120    4640 client.go:171] duration metric: took 12.916447572s to LocalClient.Create
	I0805 16:21:51.874129    4640 start.go:167] duration metric: took 12.916485141s to libmachine.API.Create "multinode-985000"
	I0805 16:21:51.874135    4640 start.go:293] postStartSetup for "multinode-985000-m02" (driver="hyperkit")
	I0805 16:21:51.874142    4640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:21:51.874152    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:51.874292    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:21:51.874313    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:51.874416    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:51.874505    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.874583    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:51.874657    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:51.915394    4640 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:21:51.919538    4640 command_runner.go:130] > NAME=Buildroot
	I0805 16:21:51.919549    4640 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:21:51.919553    4640 command_runner.go:130] > ID=buildroot
	I0805 16:21:51.919557    4640 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:21:51.919560    4640 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:21:51.919635    4640 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:21:51.919645    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:21:51.919746    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:21:51.919897    4640 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:21:51.919903    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:21:51.920070    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:21:51.929531    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:21:51.959146    4640 start.go:296] duration metric: took 85.003807ms for postStartSetup
	I0805 16:21:51.959174    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:51.959830    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:51.959996    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:51.960355    4640 start.go:128] duration metric: took 13.03589336s to createHost
	I0805 16:21:51.960370    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:51.960461    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:51.960532    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.960607    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.960679    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:51.960792    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:51.960921    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:51.960928    4640 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:21:52.018527    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722900112.019707412
	
	I0805 16:21:52.018539    4640 fix.go:216] guest clock: 1722900112.019707412
	I0805 16:21:52.018544    4640 fix.go:229] Guest: 2024-08-05 16:21:52.019707412 -0700 PDT Remote: 2024-08-05 16:21:51.960363 -0700 PDT m=+79.692294773 (delta=59.344412ms)
	I0805 16:21:52.018555    4640 fix.go:200] guest clock delta is within tolerance: 59.344412ms
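	[Note: the fix.go lines above read the guest clock over SSH via "date +%s.%N", compare it with the host clock, and accept the skew when it is within tolerance. A small Go sketch of that check; guestClockDelta and the one-second threshold are illustrative assumptions, not minikube's actual helper or limit:
	
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	// guestClockDelta parses the output of `date +%s.%N` (seconds.nanoseconds)
	// and returns host-minus-guest clock skew.
	func guestClockDelta(out string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		return host.Sub(time.Unix(sec, nsec)), nil
	}
	
	func main() {
		// Values taken from the log lines above.
		d, _ := guestClockDelta("1722900112.019707412", time.Unix(1722900111, 960363000))
		if d < 0 {
			d = -d
		}
		fmt.Printf("delta=%v within tolerance: %v\n", d, d < time.Second)
	}
	]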
	I0805 16:21:52.018561    4640 start.go:83] releasing machines lock for "multinode-985000-m02", held for 13.094193048s
	I0805 16:21:52.018577    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.018703    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:52.040117    4640 out.go:177] * Found network options:
	I0805 16:21:52.084887    4640 out.go:177]   - NO_PROXY=192.169.0.13
	W0805 16:21:52.106885    4640 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:21:52.106945    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.107811    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.108153    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.108320    4640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:21:52.108371    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	W0805 16:21:52.108412    4640 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:21:52.108519    4640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:21:52.108545    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:52.108628    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:52.108772    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:52.108842    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:52.108951    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:52.109026    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:52.109176    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:52.109197    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:52.109323    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:52.141829    4640 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:21:52.141939    4640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:21:52.141993    4640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:21:52.191903    4640 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:21:52.192466    4640 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:21:52.192507    4640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
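	[Note: the find/mv step above disables conflicting bridge and podman CNI configs by renaming them with a .mk_disabled suffix rather than deleting them, so they can be restored later. A Go sketch with the same effect; disableBridgeCNIConfs is a hypothetical helper, not minikube's code:
	
	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)
	
	// disableBridgeCNIConfs renames bridge/podman CNI config files in dir to
	// <name>.mk_disabled, skipping files that are already disabled.
	func disableBridgeCNIConfs(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}
	
	func main() {
		disabled, err := disableBridgeCNIConfs("/etc/cni/net.d")
		fmt.Println(disabled, err)
	}
	]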
	I0805 16:21:52.192514    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:21:52.192581    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:21:52.208225    4640 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:21:52.208528    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:21:52.217078    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:21:52.225489    4640 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:21:52.225534    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:21:52.233992    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:21:52.242465    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:21:52.250835    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:21:52.260065    4640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:21:52.268863    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:21:52.277242    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:21:52.285501    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
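	[Note: the run of sed edits above rewrites containerd's /etc/containerd/config.toml in place, most importantly forcing SystemdCgroup = false so containerd uses the cgroupfs cgroup driver. The key substitution expressed in Go, as an illustration only:
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// setCgroupfs applies the equivalent of:
	//   sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	// to containerd config text, preserving indentation.
	func setCgroupfs(conf string) string {
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		return re.ReplaceAllString(conf, "${1}SystemdCgroup = false")
	}
	
	func main() {
		in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true`
		fmt.Println(setCgroupfs(in))
	}
	]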
	I0805 16:21:52.293845    4640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:21:52.301185    4640 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:21:52.301319    4640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
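	[Note: enabling IPv4 forwarding here is a one-byte write to procfs; a Go equivalent of that step (requires root, like the sudo command above):
	
	package main
	
	import "os"
	
	func main() {
		// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0o644); err != nil {
			panic(err)
		}
	}
	]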
	I0805 16:21:52.308881    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:52.403323    4640 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:21:52.423722    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:21:52.423794    4640 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:21:52.442557    4640 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:21:52.443108    4640 command_runner.go:130] > [Unit]
	I0805 16:21:52.443119    4640 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:21:52.443124    4640 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:21:52.443128    4640 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:21:52.443132    4640 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:21:52.443136    4640 command_runner.go:130] > StartLimitBurst=3
	I0805 16:21:52.443141    4640 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:21:52.443147    4640 command_runner.go:130] > [Service]
	I0805 16:21:52.443151    4640 command_runner.go:130] > Type=notify
	I0805 16:21:52.443155    4640 command_runner.go:130] > Restart=on-failure
	I0805 16:21:52.443160    4640 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0805 16:21:52.443165    4640 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:21:52.443175    4640 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:21:52.443182    4640 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:21:52.443188    4640 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:21:52.443194    4640 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:21:52.443200    4640 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:21:52.443212    4640 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:21:52.443224    4640 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:21:52.443231    4640 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:21:52.443234    4640 command_runner.go:130] > ExecStart=
	I0805 16:21:52.443246    4640 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:21:52.443250    4640 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:21:52.443256    4640 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:21:52.443262    4640 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:21:52.443265    4640 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:21:52.443269    4640 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:21:52.443272    4640 command_runner.go:130] > LimitCORE=infinity
	I0805 16:21:52.443277    4640 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:21:52.443282    4640 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:21:52.443285    4640 command_runner.go:130] > TasksMax=infinity
	I0805 16:21:52.443290    4640 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:21:52.443296    4640 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:21:52.443299    4640 command_runner.go:130] > Delegate=yes
	I0805 16:21:52.443304    4640 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:21:52.443313    4640 command_runner.go:130] > KillMode=process
	I0805 16:21:52.443317    4640 command_runner.go:130] > [Install]
	I0805 16:21:52.443321    4640 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:21:52.443454    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:21:52.455112    4640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:21:52.472976    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:21:52.485648    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:21:52.496640    4640 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:21:52.520742    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:21:52.532843    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:21:52.547391    4640 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:21:52.547619    4640 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:21:52.550475    4640 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:21:52.550551    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:21:52.558821    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
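	[Note: the "scp memory" lines denote writing an in-memory buffer straight to a file on the guest, here a systemd drop-in for cri-docker.service. A Go sketch of materializing such a drop-in; the file body below is a placeholder, since the actual 189-byte payload is not shown in the log:
	
	package main
	
	import (
		"log"
		"os"
		"path/filepath"
	)
	
	// writeDropIn creates the drop-in directory if needed and writes the
	// in-memory payload to the given path.
	func writeDropIn(path string, data []byte) error {
		if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
			return err
		}
		return os.WriteFile(path, data, 0o644)
	}
	
	func main() {
		data := []byte("[Service]\n# hypothetical drop-in body\n")
		if err := writeDropIn("/etc/systemd/system/cri-docker.service.d/10-cni.conf", data); err != nil {
			log.Fatal(err)
		}
	}
	]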
	I0805 16:21:52.572801    4640 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:21:52.669948    4640 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:21:52.772017    4640 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:21:52.772038    4640 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:21:52.785587    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:52.887001    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:22:53.782764    4640 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0805 16:22:53.782779    4640 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0805 16:22:53.782788    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.895755367s)
	I0805 16:22:53.782849    4640 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0805 16:22:53.791796    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:22:53.791808    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578059613Z" level=info msg="Starting up"
	I0805 16:22:53.791820    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578746899Z" level=info msg="containerd not running, starting managed containerd"
	I0805 16:22:53.791833    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.579364099Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	I0805 16:22:53.791843    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.597194743Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0805 16:22:53.791853    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613422882Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0805 16:22:53.791865    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613448264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0805 16:22:53.791875    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613527396Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0805 16:22:53.791884    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613540484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791897    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613598776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791906    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613664323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791924    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613844698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791936    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613881896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791948    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613894727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791957    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613902000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791967    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614005875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791976    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614259691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791991    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615867073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.792000    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615974584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.792024    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616138996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.792033    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616172823Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0805 16:22:53.792042    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616291383Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0805 16:22:53.792050    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616398312Z" level=info msg="metadata content store policy set" policy=shared
	I0805 16:22:53.792059    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.618998610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0805 16:22:53.792068    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619065338Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0805 16:22:53.792076    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619081703Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0805 16:22:53.792085    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619092273Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0805 16:22:53.792094    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619101426Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0805 16:22:53.792103    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619164798Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0805 16:22:53.792113    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619370752Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0805 16:22:53.792121    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619460644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0805 16:22:53.792129    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619495461Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0805 16:22:53.792138    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619506581Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0805 16:22:53.792148    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619515758Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792158    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619524383Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792170    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619532546Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792178    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619541391Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792187    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619550990Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792197    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619565508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792266    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619576616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792278    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619584035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792291    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619598072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792299    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619608190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792307    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619616319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792316    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619625389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792326    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619634123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792335    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619648148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792344    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619658942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792353    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619667668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792362    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619676302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792371    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619686416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792380    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619694011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792388    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619701566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792397    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619709342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792406    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619719250Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0805 16:22:53.792415    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619733203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792423    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619741785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792432    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619749153Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0805 16:22:53.792442    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619797467Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0805 16:22:53.792454    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619811479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0805 16:22:53.792467    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619819137Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0805 16:22:53.792661    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619826861Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0805 16:22:53.792673    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619833500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792682    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619841896Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0805 16:22:53.792690    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619852419Z" level=info msg="NRI interface is disabled by configuration."
	I0805 16:22:53.792702    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620071162Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0805 16:22:53.792710    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620124755Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0805 16:22:53.792718    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620155079Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0805 16:22:53.792725    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620168148Z" level=info msg="containerd successfully booted in 0.023750s"
	I0805 16:22:53.792734    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.639692405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0805 16:22:53.792741    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.644102102Z" level=info msg="Loading containers: start."
	I0805 16:22:53.792763    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.740540264Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0805 16:22:53.792774    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.826229634Z" level=info msg="Loading containers: done."
	I0805 16:22:53.792783    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843276878Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0805 16:22:53.792792    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843375843Z" level=info msg="Daemon has completed initialization"
	I0805 16:22:53.792800    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869275976Z" level=info msg="API listen on /var/run/docker.sock"
	I0805 16:22:53.792807    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869434474Z" level=info msg="API listen on [::]:2376"
	I0805 16:22:53.792813    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	I0805 16:22:53.792821    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.919662359Z" level=info msg="Processing signal 'terminated'"
	I0805 16:22:53.792829    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920773928Z" level=info msg="Daemon shutdown complete"
	I0805 16:22:53.792840    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920792538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0805 16:22:53.792852    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920845272Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0805 16:22:53.792861    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920858866Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0805 16:22:53.792868    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0805 16:22:53.792874    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0805 16:22:53.792904    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0805 16:22:53.792911    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:22:53.792918    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 dockerd[923]: time="2024-08-05T23:21:53.957339969Z" level=info msg="Starting up"
	I0805 16:22:53.792929    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 dockerd[923]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0805 16:22:53.792940    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0805 16:22:53.792946    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0805 16:22:53.792952    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0805 16:22:53.817223    4640 out.go:177] 
	W0805 16:22:53.838182    4640 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 05 23:21:50 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578059613Z" level=info msg="Starting up"
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578746899Z" level=info msg="containerd not running, starting managed containerd"
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.579364099Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.597194743Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613422882Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613448264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613527396Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613540484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613598776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613664323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613844698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613881896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613894727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613902000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614005875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614259691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615867073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615974584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616138996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616172823Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616291383Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616398312Z" level=info msg="metadata content store policy set" policy=shared
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.618998610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619065338Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619081703Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619092273Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619101426Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619164798Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619370752Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619460644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619495461Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619506581Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619515758Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619524383Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619532546Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619541391Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619550990Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619565508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619576616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619584035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619598072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619608190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619616319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619625389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619634123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619648148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619658942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619667668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619676302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619686416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619694011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619701566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619709342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619719250Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619733203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619741785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619749153Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619797467Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619811479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619819137Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619826861Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619833500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619841896Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619852419Z" level=info msg="NRI interface is disabled by configuration."
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620071162Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620124755Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620155079Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620168148Z" level=info msg="containerd successfully booted in 0.023750s"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.639692405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.644102102Z" level=info msg="Loading containers: start."
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.740540264Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.826229634Z" level=info msg="Loading containers: done."
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843276878Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843375843Z" level=info msg="Daemon has completed initialization"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869275976Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869434474Z" level=info msg="API listen on [::]:2376"
	Aug 05 23:21:51 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.919662359Z" level=info msg="Processing signal 'terminated'"
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920773928Z" level=info msg="Daemon shutdown complete"
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920792538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920845272Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920858866Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 05 23:21:52 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:21:53 multinode-985000-m02 dockerd[923]: time="2024-08-05T23:21:53.957339969Z" level=info msg="Starting up"
	Aug 05 23:22:53 multinode-985000-m02 dockerd[923]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
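	[Note on the journal above: the first dockerd (pid 514) starts its own managed containerd and comes up cleanly, but after the configuration rewrites and "systemctl restart docker", the second dockerd (pid 923) blocks for a full minute dialing /run/containerd/containerd.sock and then fails with "context deadline exceeded". One plausible reading, given that the flow ran "systemctl stop -f containerd" shortly before the restart, is that dockerd was left waiting on the system containerd socket with no listener behind it. A quick preflight like the sketch below would surface that condition in seconds instead of a sixty-second hang; socketAlive is a hypothetical helper, not part of minikube:
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// socketAlive reports whether anything is accepting connections on a unix
	// socket; a stale socket file with no listener fails here quickly.
	func socketAlive(path string, timeout time.Duration) bool {
		c, err := net.DialTimeout("unix", path, timeout)
		if err != nil {
			return false
		}
		c.Close()
		return true
	}
	
	func main() {
		fmt.Println(socketAlive("/run/containerd/containerd.sock", 2*time.Second))
	}
	]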
	W0805 16:22:53.838301    4640 out.go:239] * 
	W0805 16:22:53.839537    4640 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:22:53.901092    4640 out.go:177] 
	
	
	==> Docker <==
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.538240622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.545949341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546006859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546094356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546213245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:21:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2a8cd74365e92f179bb6ee1ce28c9364c192d2bf64c54e8b18c5339cfbdf5dcd/resolv.conf as [nameserver 192.169.0.1]"
	Aug 05 23:21:36 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:21:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/35b9ac42edc06af57c697463456d60a00f8d9d12849ef967af1e639bc238e3b3/resolv.conf as [nameserver 192.169.0.1]"
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.715025205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.715620680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.716022138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.717088853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755323726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755409641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755418837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.764703174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.493861515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.493963422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.494329548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.494770138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:57 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:22:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/abfb33d4f204dd0b2a7ffc533336cce5539144674b64125ac7373b0be8961559/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 05 23:22:58 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:22:58Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841390849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841491056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841532145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841640743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0cbc162071e51       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   abfb33d4f204d       busybox-fc5497c4f-44k5g
	c9365aec33892       cbb01a7bd410d                                                                                         13 minutes ago      Running             coredns                   0                   35b9ac42edc06       coredns-7db6d8ff4d-fqtll
	3d9fd612d0b14       6e38f40d628db                                                                                         13 minutes ago      Running             storage-provisioner       0                   2a8cd74365e92       storage-provisioner
	724e5cfab0a27       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              14 minutes ago      Running             kindnet-cni               0                   65a1122097f07       kindnet-tvtvg
	d58ca48f9f8b2       55bb025d2cfa5                                                                                         14 minutes ago      Running             kube-proxy                0                   c91338eb0e138       kube-proxy-fwgw7
	792feba1a6f6b       3edc18e7b7672                                                                                         14 minutes ago      Running             kube-scheduler            0                   c86e04eb7823b       kube-scheduler-multinode-985000
	1fdd85b796ab3       3861cfcd7c04c                                                                                         14 minutes ago      Running             etcd                      0                   b58900db52990       etcd-multinode-985000
	d11865076c645       76932a3b37d7e                                                                                         14 minutes ago      Running             kube-controller-manager   0                   55a20063845e3       kube-controller-manager-multinode-985000
	608878b33f358       1f6d574d502f3                                                                                         14 minutes ago      Running             kube-apiserver            0                   569788c2699f1       kube-apiserver-multinode-985000
	
	
	==> coredns [c9365aec3389] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57821 - 19682 "HINFO IN 7732396596932693360.4385804994640298901. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014623104s
	[INFO] 10.244.0.3:44234 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136193s
	[INFO] 10.244.0.3:37423 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.058799401s
	[INFO] 10.244.0.3:57961 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.010090318s
	[INFO] 10.244.0.3:37799 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.012765436s
	[INFO] 10.244.0.3:46499 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078364s
	[INFO] 10.244.0.3:42436 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011216992s
	[INFO] 10.244.0.3:35880 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000144767s
	[INFO] 10.244.0.3:39224 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104006s
	[INFO] 10.244.0.3:48536 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013324615s
	[INFO] 10.244.0.3:55841 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000221823s
	[INFO] 10.244.0.3:46712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111417s
	[INFO] 10.244.0.3:51982 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099744s
	[INFO] 10.244.0.3:55425 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080184s
	[INFO] 10.244.0.3:58084 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119904s
	[INFO] 10.244.0.3:57892 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049065s
	[INFO] 10.244.0.3:52329 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049128s
	[INFO] 10.244.0.3:60384 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083319s
	[INFO] 10.244.0.3:51923 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000058598s
	[INFO] 10.244.0.3:37985 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007256s
	[INFO] 10.244.0.3:45792 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000071025s
	
	
	==> describe nodes <==
	Name:               multinode-985000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-985000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=multinode-985000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T16_21_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:21:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-985000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:35:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-985000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 43d0d80c8ac846e58ac4351481e2a76f
	  System UUID:                3ac6443b-0000-0000-898d-9b152fa64288
	  Boot ID:                    382df761-aca3-4a9d-bdce-655bf0444398
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-44k5g                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-fqtll                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-multinode-985000                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-tvtvg                                100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-multinode-985000              250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-985000     200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-fwgw7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-multinode-985000              100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node multinode-985000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node multinode-985000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node multinode-985000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node multinode-985000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node multinode-985000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node multinode-985000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node multinode-985000 event: Registered Node multinode-985000 in Controller
	  Normal  NodeReady                13m                kubelet          Node multinode-985000 status is now: NodeReady
	
	
	Name:               multinode-985000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-985000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=multinode-985000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T16_34_49_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:34:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-985000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:35:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:35:19 +0000   Mon, 05 Aug 2024 23:34:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:35:19 +0000   Mon, 05 Aug 2024 23:34:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:35:19 +0000   Mon, 05 Aug 2024 23:34:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:35:19 +0000   Mon, 05 Aug 2024 23:35:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.15
	  Hostname:    multinode-985000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 826016b56497466499a1ccf530c0b20a
	  System UUID:                f79c425f-0000-0000-b959-1b18fd31916b
	  Boot ID:                    e2b098c4-c586-45f3-bd88-3d2d31770824
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-ptd5b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-5kfjr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      40s
	  kube-system                 kube-proxy-s65dd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 34s                kube-proxy       
	  Normal  NodeHasSufficientMemory  41s (x2 over 41s)  kubelet          Node multinode-985000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x2 over 41s)  kubelet          Node multinode-985000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x2 over 41s)  kubelet          Node multinode-985000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  41s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           40s                node-controller  Node multinode-985000-m03 event: Registered Node multinode-985000-m03 in Controller
	  Normal  NodeReady                18s                kubelet          Node multinode-985000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.261909] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.788416] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
	[  +0.099076] systemd-fstab-generator[502]: Ignoring "noauto" option for root device
	[  +1.730104] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +0.293514] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.050985] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.056812] systemd-fstab-generator[892]: Ignoring "noauto" option for root device
	[  +0.126132] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[  +2.458612] systemd-fstab-generator[1120]: Ignoring "noauto" option for root device
	[  +0.104830] systemd-fstab-generator[1132]: Ignoring "noauto" option for root device
	[  +0.110549] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +0.128910] systemd-fstab-generator[1159]: Ignoring "noauto" option for root device
	[  +3.841948] systemd-fstab-generator[1259]: Ignoring "noauto" option for root device
	[  +0.049995] kauditd_printk_skb: 180 callbacks suppressed
	[  +2.575866] systemd-fstab-generator[1508]: Ignoring "noauto" option for root device
	[  +3.513702] systemd-fstab-generator[1689]: Ignoring "noauto" option for root device
	[  +0.052965] kauditd_printk_skb: 70 callbacks suppressed
	[Aug 5 23:21] systemd-fstab-generator[2095]: Ignoring "noauto" option for root device
	[  +0.093506] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.997559] systemd-fstab-generator[2287]: Ignoring "noauto" option for root device
	[  +0.103967] kauditd_printk_skb: 12 callbacks suppressed
	[ +16.210215] kauditd_printk_skb: 60 callbacks suppressed
	[Aug 5 23:22] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [1fdd85b796ab] <==
	{"level":"info","ts":"2024-08-05T23:21:02.190598Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:21:02.190621Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:21:02.179152Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 switched to configuration voters=(16152458731666035825)"}
	{"level":"info","ts":"2024-08-05T23:21:02.190761Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2024-08-05T23:21:02.845352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.84543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.845462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.845512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.849595Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.851787Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-985000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T23:21:02.852037Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:21:02.855611Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-08-05T23:21:02.856003Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.856059Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.85615Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.863221Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:21:02.86336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T23:21:02.863406Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T23:21:02.864495Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T23:31:02.914901Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":684}
	{"level":"info","ts":"2024-08-05T23:31:02.918154Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":684,"took":"2.558785ms","hash":2682644219,"current-db-size-bytes":2088960,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2088960,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-08-05T23:31:02.918199Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2682644219,"revision":684,"compact-revision":-1}
	
	
	==> kernel <==
	 23:35:30 up 14 min,  0 users,  load average: 0.43, 0.18, 0.11
	Linux multinode-985000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [724e5cfab0a2] <==
	I0805 23:34:14.989462       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:14.989592       1 main.go:299] handling current node
	I0805 23:34:24.989135       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:24.989269       1 main.go:299] handling current node
	I0805 23:34:34.997631       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:34.997789       1 main.go:299] handling current node
	I0805 23:34:44.997368       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:44.997416       1 main.go:299] handling current node
	I0805 23:34:54.992568       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:34:54.992629       1 main.go:299] handling current node
	I0805 23:34:54.992643       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:34:54.992648       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.1.0/24] 
	I0805 23:34:54.992876       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.169.0.15 Flags: [] Table: 0} 
	I0805 23:35:04.990312       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:35:04.990398       1 main.go:299] handling current node
	I0805 23:35:04.990506       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:35:04.990544       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.1.0/24] 
	I0805 23:35:14.988650       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:35:14.988669       1 main.go:299] handling current node
	I0805 23:35:14.988679       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:35:14.988682       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.1.0/24] 
	I0805 23:35:24.989729       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:35:24.989803       1 main.go:299] handling current node
	I0805 23:35:24.989824       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:35:24.989837       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [608878b33f35] <==
	I0805 23:21:04.097032       1 aggregator.go:165] initial CRD sync complete...
	I0805 23:21:04.097038       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 23:21:04.097041       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 23:21:04.097046       1 cache.go:39] Caches are synced for autoregister controller
	I0805 23:21:04.110976       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 23:21:04.964782       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0805 23:21:04.969492       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0805 23:21:04.969592       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0805 23:21:05.293407       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 23:21:05.318630       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 23:21:05.372930       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0805 23:21:05.377089       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I0805 23:21:05.377814       1 controller.go:615] quota admission added evaluator for: endpoints
	I0805 23:21:05.381896       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 23:21:06.014220       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0805 23:21:06.529594       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 23:21:06.534785       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0805 23:21:06.541889       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 23:21:20.069451       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0805 23:21:20.168118       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0805 23:34:22.712021       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52583: use of closed network connection
	E0805 23:34:23.040370       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52588: use of closed network connection
	E0805 23:34:23.352264       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52593: use of closed network connection
	E0805 23:34:26.444399       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52624: use of closed network connection
	E0805 23:34:26.631411       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52626: use of closed network connection
	
	
	==> kube-controller-manager [d11865076c64] <==
	I0805 23:21:20.453666       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.448745ms"
	I0805 23:21:20.454853       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="1.144243ms"
	I0805 23:21:20.787054       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="47.481389ms"
	I0805 23:21:20.817469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.368774ms"
	I0805 23:21:20.817550       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.975µs"
	I0805 23:21:35.878200       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.077µs"
	I0805 23:21:35.888778       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.967µs"
	I0805 23:21:37.680305       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.353µs"
	I0805 23:21:37.699191       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="7.51419ms"
	I0805 23:21:37.699276       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.856µs"
	I0805 23:21:39.419986       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0805 23:22:57.139604       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="58.652844ms"
	I0805 23:22:57.152479       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.645403ms"
	I0805 23:22:57.161837       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="9.312944ms"
	I0805 23:22:57.161913       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="32.986µs"
	I0805 23:22:59.131878       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.268042ms"
	I0805 23:22:59.132399       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.529µs"
	I0805 23:34:49.118620       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-985000-m03\" does not exist"
	I0805 23:34:49.123685       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-985000-m03" podCIDRs=["10.244.1.0/24"]
	I0805 23:34:49.553799       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-985000-m03"
	I0805 23:35:12.244278       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-985000-m03"
	I0805 23:35:12.252224       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.969µs"
	I0805 23:35:12.259725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.754µs"
	I0805 23:35:14.267796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.716009ms"
	I0805 23:35:14.267862       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.069µs"
	
	
	==> kube-proxy [d58ca48f9f8b] <==
	I0805 23:21:21.029929       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:21:21.072929       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0805 23:21:21.105532       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:21:21.105552       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:21:21.105563       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:21:21.107493       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:21:21.107594       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:21:21.107602       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:21:21.108477       1 config.go:192] "Starting service config controller"
	I0805 23:21:21.108482       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:21:21.108492       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:21:21.108494       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:21:21.108784       1 config.go:319] "Starting node config controller"
	I0805 23:21:21.108789       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:21:21.209420       1 shared_informer.go:320] Caches are synced for node config
	I0805 23:21:21.209474       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:21:21.209501       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [792feba1a6f6] <==
	E0805 23:21:04.024310       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:04.024229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 23:21:04.024017       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:21:04.024329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:21:04.024047       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:21:04.024362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:21:04.024118       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:21:04.024431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0805 23:21:04.860871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:04.861069       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:04.959895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 23:21:04.959949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 23:21:04.962444       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:21:04.962496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:21:04.968410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:21:04.968452       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:21:05.030527       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 23:21:05.030566       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 23:21:05.076451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:05.076659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:05.118159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:05.118676       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:05.141945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:21:05.142020       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0805 23:21:08.218627       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 23:31:06 multinode-985000 kubelet[2102]: E0805 23:31:06.388949    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:31:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:31:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:31:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:31:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:32:06 multinode-985000 kubelet[2102]: E0805 23:32:06.388091    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:33:06 multinode-985000 kubelet[2102]: E0805 23:33:06.388876    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:34:06 multinode-985000 kubelet[2102]: E0805 23:34:06.388016    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:35:06 multinode-985000 kubelet[2102]: E0805 23:35:06.389737    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:35:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:35:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:35:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:35:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-985000 -n multinode-985000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-985000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopNode (11.42s)
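To re-drive this post-mortem by hand when triaging the failure, the checks below mirror what helpers_test.go runs above. This is a minimal sketch: it assumes the multinode-985000 profile from this run still exists on the build host and that out/minikube-darwin-amd64 is still in place; the final ip6tables probe is an extra diagnostic suggested by the recurring kubelet canary errors, not something the harness runs.

# Per-node health; the failure above shows m02 with "kubelet: Stopped".
out/minikube-darwin-amd64 -p multinode-985000 status -v=7 --alsologtostderr

# Collect the same component logs dumped in the post-mortem section.
out/minikube-darwin-amd64 -p multinode-985000 logs --file=logs.txt

# Pods that are not Running, as helpers_test.go checks.
kubectl --context multinode-985000 get po -A --field-selector=status.phase!=Running

# The kubelet canary errors indicate the guest kernel lacks the ip6tables
# nat table; this can be confirmed from inside the VM.
out/minikube-darwin-amd64 -p multinode-985000 ssh -- sudo ip6tables -t nat -L -n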

                                                
                                    
TestMultiNode/serial/StartAfterStop (83.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-985000 node start m03 -v=7 --alsologtostderr: (40.957252997s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status -v=7 --alsologtostderr: exit status 2 (307.143142ms)

                                                
                                                
-- stdout --
	multinode-985000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-985000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-985000-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:36:12.170777    5398 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:36:12.171558    5398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:36:12.171567    5398 out.go:304] Setting ErrFile to fd 2...
	I0805 16:36:12.171573    5398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:36:12.172040    5398 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:36:12.172240    5398 out.go:298] Setting JSON to false
	I0805 16:36:12.172264    5398 mustload.go:65] Loading cluster: multinode-985000
	I0805 16:36:12.172302    5398 notify.go:220] Checking for updates...
	I0805 16:36:12.172576    5398 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:36:12.172592    5398 status.go:255] checking status of multinode-985000 ...
	I0805 16:36:12.172944    5398 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:12.172983    5398 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:12.182073    5398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52898
	I0805 16:36:12.182432    5398 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:12.182863    5398 main.go:141] libmachine: Using API Version  1
	I0805 16:36:12.182873    5398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:12.183097    5398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:12.183207    5398 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:36:12.183281    5398 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:12.183352    5398 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:36:12.184288    5398 status.go:330] multinode-985000 host status = "Running" (err=<nil>)
	I0805 16:36:12.184306    5398 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:36:12.184559    5398 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:12.184580    5398 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:12.192996    5398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52900
	I0805 16:36:12.193341    5398 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:12.193700    5398 main.go:141] libmachine: Using API Version  1
	I0805 16:36:12.193719    5398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:12.193954    5398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:12.194069    5398 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:36:12.194142    5398 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:36:12.194417    5398 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:12.194439    5398 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:12.202872    5398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52902
	I0805 16:36:12.203167    5398 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:12.203452    5398 main.go:141] libmachine: Using API Version  1
	I0805 16:36:12.203468    5398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:12.203658    5398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:12.203761    5398 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:36:12.203897    5398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:12.203918    5398 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:36:12.203996    5398 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:36:12.204076    5398 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:36:12.204150    5398 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:36:12.204235    5398 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:36:12.234348    5398 ssh_runner.go:195] Run: systemctl --version
	I0805 16:36:12.238695    5398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:12.251197    5398 kubeconfig.go:125] found "multinode-985000" server: "https://192.169.0.13:8443"
	I0805 16:36:12.251224    5398 api_server.go:166] Checking apiserver status ...
	I0805 16:36:12.251265    5398 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:36:12.264044    5398 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup
	W0805 16:36:12.272080    5398 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:36:12.272140    5398 ssh_runner.go:195] Run: ls
	I0805 16:36:12.275386    5398 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:36:12.278326    5398 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0805 16:36:12.278336    5398 status.go:422] multinode-985000 apiserver status = Running (err=<nil>)
	I0805 16:36:12.278352    5398 status.go:257] multinode-985000 status: &{Name:multinode-985000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:36:12.278363    5398 status.go:255] checking status of multinode-985000-m02 ...
	I0805 16:36:12.278617    5398 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:12.278637    5398 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:12.287139    5398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52906
	I0805 16:36:12.287461    5398 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:12.287796    5398 main.go:141] libmachine: Using API Version  1
	I0805 16:36:12.287809    5398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:12.288032    5398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:12.288161    5398 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:36:12.288244    5398 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:12.288307    5398 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:36:12.289261    5398 status.go:330] multinode-985000-m02 host status = "Running" (err=<nil>)
	I0805 16:36:12.289269    5398 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:36:12.289512    5398 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:12.289537    5398 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:12.298048    5398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52908
	I0805 16:36:12.298379    5398 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:12.298695    5398 main.go:141] libmachine: Using API Version  1
	I0805 16:36:12.298704    5398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:12.298931    5398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:12.299058    5398 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:36:12.299135    5398 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:36:12.299397    5398 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:12.299421    5398 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:12.307728    5398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52910
	I0805 16:36:12.308100    5398 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:12.308472    5398 main.go:141] libmachine: Using API Version  1
	I0805 16:36:12.308485    5398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:12.308685    5398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:12.308799    5398 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:36:12.308918    5398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:12.308929    5398 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:36:12.309045    5398 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:36:12.309124    5398 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:36:12.309217    5398 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:36:12.309315    5398 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:36:12.342247    5398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:12.352974    5398 status.go:257] multinode-985000-m02 status: &{Name:multinode-985000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:36:12.352990    5398 status.go:255] checking status of multinode-985000-m03 ...
	I0805 16:36:12.353269    5398 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:12.353290    5398 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:12.361940    5398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52913
	I0805 16:36:12.362333    5398 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:12.362642    5398 main.go:141] libmachine: Using API Version  1
	I0805 16:36:12.362651    5398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:12.362850    5398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:12.362950    5398 main.go:141] libmachine: (multinode-985000-m03) Calling .GetState
	I0805 16:36:12.363036    5398 main.go:141] libmachine: (multinode-985000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:12.363096    5398 main.go:141] libmachine: (multinode-985000-m03) DBG | hyperkit pid from json: 5380
	I0805 16:36:12.364052    5398 status.go:330] multinode-985000-m03 host status = "Running" (err=<nil>)
	I0805 16:36:12.364060    5398 host.go:66] Checking if "multinode-985000-m03" exists ...
	I0805 16:36:12.364298    5398 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:12.364321    5398 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:12.372698    5398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52915
	I0805 16:36:12.373004    5398 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:12.373305    5398 main.go:141] libmachine: Using API Version  1
	I0805 16:36:12.373313    5398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:12.373539    5398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:12.373648    5398 main.go:141] libmachine: (multinode-985000-m03) Calling .GetIP
	I0805 16:36:12.373737    5398 host.go:66] Checking if "multinode-985000-m03" exists ...
	I0805 16:36:12.373995    5398 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:12.374017    5398 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:12.382430    5398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52917
	I0805 16:36:12.382749    5398 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:12.383067    5398 main.go:141] libmachine: Using API Version  1
	I0805 16:36:12.383076    5398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:12.383261    5398 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:12.383372    5398 main.go:141] libmachine: (multinode-985000-m03) Calling .DriverName
	I0805 16:36:12.383494    5398 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:12.383506    5398 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHHostname
	I0805 16:36:12.383584    5398 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHPort
	I0805 16:36:12.383666    5398 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHKeyPath
	I0805 16:36:12.383739    5398 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHUsername
	I0805 16:36:12.383814    5398 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m03/id_rsa Username:docker}
	I0805 16:36:12.412735    5398 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:12.423153    5398 status.go:257] multinode-985000-m03 status: &{Name:multinode-985000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
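The block above is the wait at multinode_test.go:290: after `node start m03` succeeds, the test repeatedly runs `status` until every node reports a running host and kubelet, and `minikube status` exits with status 2 while any node is still degraded (here m02, whose kubelet stays Stopped). A minimal sketch of that kind of poll loop, with an illustrative timeout and the binary path taken from the log (this is not the test's actual helper code):

	// poll_status.go - illustrative sketch of the wait loop the test performs;
	// the timeout and structure are assumptions, only the command line is
	// copied from the log above.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// `minikube status` exits 2 when any node's host or kubelet is not Running.
			cmd := exec.Command("out/minikube-darwin-amd64", "-p", "multinode-985000",
				"status", "-v=7", "--alsologtostderr")
			out, err := cmd.CombinedOutput()
			if err == nil {
				fmt.Println("all nodes healthy:\n" + string(out))
				return
			}
			time.Sleep(time.Second) // back off before the next probe
		}
		fmt.Println("timed out waiting for all kubelets to report Running")
	}

The retries below repeat this probe, and each one keeps failing with exit status 2 for the same reason.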
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status -v=7 --alsologtostderr: exit status 2 (306.713322ms)

-- stdout --
	multinode-985000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-985000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-985000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0805 16:36:13.371734    5409 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:36:13.372497    5409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:36:13.372505    5409 out.go:304] Setting ErrFile to fd 2...
	I0805 16:36:13.372511    5409 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:36:13.373000    5409 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:36:13.373199    5409 out.go:298] Setting JSON to false
	I0805 16:36:13.373221    5409 mustload.go:65] Loading cluster: multinode-985000
	I0805 16:36:13.373263    5409 notify.go:220] Checking for updates...
	I0805 16:36:13.373500    5409 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:36:13.373517    5409 status.go:255] checking status of multinode-985000 ...
	I0805 16:36:13.373870    5409 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:13.373923    5409 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:13.382950    5409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52921
	I0805 16:36:13.383319    5409 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:13.383717    5409 main.go:141] libmachine: Using API Version  1
	I0805 16:36:13.383725    5409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:13.383912    5409 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:13.384019    5409 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:36:13.384097    5409 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:13.384166    5409 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:36:13.385134    5409 status.go:330] multinode-985000 host status = "Running" (err=<nil>)
	I0805 16:36:13.385156    5409 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:36:13.385387    5409 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:13.385406    5409 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:13.393558    5409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52923
	I0805 16:36:13.393888    5409 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:13.394210    5409 main.go:141] libmachine: Using API Version  1
	I0805 16:36:13.394220    5409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:13.394406    5409 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:13.394521    5409 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:36:13.394607    5409 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:36:13.394854    5409 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:13.394895    5409 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:13.403380    5409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52925
	I0805 16:36:13.403767    5409 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:13.404071    5409 main.go:141] libmachine: Using API Version  1
	I0805 16:36:13.404084    5409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:13.404279    5409 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:13.404379    5409 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:36:13.404516    5409 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:13.404538    5409 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:36:13.404608    5409 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:36:13.404737    5409 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:36:13.404847    5409 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:36:13.404929    5409 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:36:13.434606    5409 ssh_runner.go:195] Run: systemctl --version
	I0805 16:36:13.439015    5409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:13.450507    5409 kubeconfig.go:125] found "multinode-985000" server: "https://192.169.0.13:8443"
	I0805 16:36:13.450533    5409 api_server.go:166] Checking apiserver status ...
	I0805 16:36:13.450571    5409 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:36:13.462117    5409 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup
	W0805 16:36:13.471161    5409 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:36:13.471220    5409 ssh_runner.go:195] Run: ls
	I0805 16:36:13.474492    5409 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:36:13.477504    5409 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0805 16:36:13.477515    5409 status.go:422] multinode-985000 apiserver status = Running (err=<nil>)
	I0805 16:36:13.477524    5409 status.go:257] multinode-985000 status: &{Name:multinode-985000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:36:13.477536    5409 status.go:255] checking status of multinode-985000-m02 ...
	I0805 16:36:13.477787    5409 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:13.477813    5409 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:13.486309    5409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52929
	I0805 16:36:13.486629    5409 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:13.487001    5409 main.go:141] libmachine: Using API Version  1
	I0805 16:36:13.487017    5409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:13.487234    5409 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:13.487349    5409 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:36:13.487426    5409 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:13.487499    5409 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:36:13.488482    5409 status.go:330] multinode-985000-m02 host status = "Running" (err=<nil>)
	I0805 16:36:13.488490    5409 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:36:13.488737    5409 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:13.488763    5409 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:13.497102    5409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52931
	I0805 16:36:13.497436    5409 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:13.497759    5409 main.go:141] libmachine: Using API Version  1
	I0805 16:36:13.497773    5409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:13.497996    5409 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:13.498110    5409 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:36:13.498216    5409 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:36:13.498470    5409 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:13.498493    5409 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:13.507076    5409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52933
	I0805 16:36:13.507402    5409 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:13.507746    5409 main.go:141] libmachine: Using API Version  1
	I0805 16:36:13.507763    5409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:13.507966    5409 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:13.508079    5409 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:36:13.508213    5409 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:13.508225    5409 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:36:13.508313    5409 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:36:13.508392    5409 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:36:13.508478    5409 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:36:13.508567    5409 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:36:13.541677    5409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:13.552011    5409 status.go:257] multinode-985000-m02 status: &{Name:multinode-985000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:36:13.552034    5409 status.go:255] checking status of multinode-985000-m03 ...
	I0805 16:36:13.552303    5409 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:13.552325    5409 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:13.560888    5409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52936
	I0805 16:36:13.561227    5409 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:13.561542    5409 main.go:141] libmachine: Using API Version  1
	I0805 16:36:13.561554    5409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:13.561765    5409 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:13.561868    5409 main.go:141] libmachine: (multinode-985000-m03) Calling .GetState
	I0805 16:36:13.561944    5409 main.go:141] libmachine: (multinode-985000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:13.562012    5409 main.go:141] libmachine: (multinode-985000-m03) DBG | hyperkit pid from json: 5380
	I0805 16:36:13.562989    5409 status.go:330] multinode-985000-m03 host status = "Running" (err=<nil>)
	I0805 16:36:13.562999    5409 host.go:66] Checking if "multinode-985000-m03" exists ...
	I0805 16:36:13.563246    5409 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:13.563272    5409 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:13.571715    5409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52938
	I0805 16:36:13.572035    5409 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:13.572384    5409 main.go:141] libmachine: Using API Version  1
	I0805 16:36:13.572404    5409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:13.572611    5409 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:13.572716    5409 main.go:141] libmachine: (multinode-985000-m03) Calling .GetIP
	I0805 16:36:13.572798    5409 host.go:66] Checking if "multinode-985000-m03" exists ...
	I0805 16:36:13.573037    5409 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:13.573060    5409 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:13.581347    5409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52940
	I0805 16:36:13.581692    5409 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:13.582036    5409 main.go:141] libmachine: Using API Version  1
	I0805 16:36:13.582051    5409 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:13.582265    5409 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:13.582380    5409 main.go:141] libmachine: (multinode-985000-m03) Calling .DriverName
	I0805 16:36:13.582505    5409 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:13.582515    5409 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHHostname
	I0805 16:36:13.582592    5409 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHPort
	I0805 16:36:13.582660    5409 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHKeyPath
	I0805 16:36:13.582751    5409 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHUsername
	I0805 16:36:13.582827    5409 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m03/id_rsa Username:docker}
	I0805 16:36:13.612194    5409 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:13.622210    5409 status.go:257] multinode-985000-m03 status: &{Name:multinode-985000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
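Each stderr block walks the same control-plane probe: locate the kube-apiserver process with pgrep, attempt to read its freezer cgroup (which fails here with "unable to find freezer cgroup", most likely because the guest exposes no freezer line in /proc/<pid>/cgroup), then fall back to an HTTPS GET against https://192.169.0.13:8443/healthz, which returns 200. A sketch of that healthz fallback, assuming the probe skips certificate verification because the apiserver serves a cert signed by minikube's own CA:

	// healthz_probe.go - sketch of the healthz fallback seen at api_server.go:253
	// in the log; the endpoint is copied from the log, the timeout and TLS
	// handling are assumptions.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver cert chains to minikube's CA, so a bare probe must
			// either trust that CA bundle or skip verification as done here.
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.169.0.13:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}

Because this probe keeps returning 200, the control plane is consistently judged healthy; only the m02 worker drags the overall status to exit 2.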
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status -v=7 --alsologtostderr: exit status 2 (310.635941ms)

-- stdout --
	multinode-985000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-985000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-985000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0805 16:36:15.318318    5420 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:36:15.318587    5420 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:36:15.318592    5420 out.go:304] Setting ErrFile to fd 2...
	I0805 16:36:15.318596    5420 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:36:15.318777    5420 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:36:15.318950    5420 out.go:298] Setting JSON to false
	I0805 16:36:15.318973    5420 mustload.go:65] Loading cluster: multinode-985000
	I0805 16:36:15.319019    5420 notify.go:220] Checking for updates...
	I0805 16:36:15.319271    5420 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:36:15.319289    5420 status.go:255] checking status of multinode-985000 ...
	I0805 16:36:15.319703    5420 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:15.319773    5420 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:15.328294    5420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52944
	I0805 16:36:15.328651    5420 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:15.329063    5420 main.go:141] libmachine: Using API Version  1
	I0805 16:36:15.329073    5420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:15.329277    5420 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:15.329380    5420 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:36:15.329465    5420 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:15.329534    5420 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:36:15.330498    5420 status.go:330] multinode-985000 host status = "Running" (err=<nil>)
	I0805 16:36:15.330522    5420 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:36:15.330757    5420 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:15.330778    5420 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:15.339265    5420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52946
	I0805 16:36:15.339600    5420 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:15.339923    5420 main.go:141] libmachine: Using API Version  1
	I0805 16:36:15.339942    5420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:15.340172    5420 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:15.340278    5420 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:36:15.340360    5420 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:36:15.340607    5420 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:15.340630    5420 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:15.349006    5420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52948
	I0805 16:36:15.349325    5420 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:15.349656    5420 main.go:141] libmachine: Using API Version  1
	I0805 16:36:15.349667    5420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:15.349872    5420 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:15.349990    5420 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:36:15.350117    5420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:15.350136    5420 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:36:15.350203    5420 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:36:15.350300    5420 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:36:15.350373    5420 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:36:15.350451    5420 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:36:15.380032    5420 ssh_runner.go:195] Run: systemctl --version
	I0805 16:36:15.384262    5420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:15.394948    5420 kubeconfig.go:125] found "multinode-985000" server: "https://192.169.0.13:8443"
	I0805 16:36:15.394973    5420 api_server.go:166] Checking apiserver status ...
	I0805 16:36:15.395009    5420 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:36:15.411838    5420 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup
	W0805 16:36:15.421140    5420 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:36:15.421192    5420 ssh_runner.go:195] Run: ls
	I0805 16:36:15.425234    5420 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:36:15.428293    5420 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0805 16:36:15.428303    5420 status.go:422] multinode-985000 apiserver status = Running (err=<nil>)
	I0805 16:36:15.428311    5420 status.go:257] multinode-985000 status: &{Name:multinode-985000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:36:15.428326    5420 status.go:255] checking status of multinode-985000-m02 ...
	I0805 16:36:15.428585    5420 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:15.428605    5420 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:15.437072    5420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52952
	I0805 16:36:15.437390    5420 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:15.437752    5420 main.go:141] libmachine: Using API Version  1
	I0805 16:36:15.437784    5420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:15.437980    5420 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:15.438099    5420 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:36:15.438191    5420 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:15.438253    5420 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:36:15.439237    5420 status.go:330] multinode-985000-m02 host status = "Running" (err=<nil>)
	I0805 16:36:15.439253    5420 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:36:15.439524    5420 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:15.439551    5420 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:15.448167    5420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52954
	I0805 16:36:15.448524    5420 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:15.448880    5420 main.go:141] libmachine: Using API Version  1
	I0805 16:36:15.448897    5420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:15.449143    5420 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:15.449274    5420 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:36:15.449359    5420 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:36:15.449609    5420 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:15.449632    5420 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:15.457964    5420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52956
	I0805 16:36:15.458299    5420 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:15.458654    5420 main.go:141] libmachine: Using API Version  1
	I0805 16:36:15.458673    5420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:15.458872    5420 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:15.458966    5420 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:36:15.459081    5420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:15.459093    5420 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:36:15.459164    5420 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:36:15.459244    5420 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:36:15.459325    5420 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:36:15.459394    5420 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:36:15.492591    5420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:15.503027    5420 status.go:257] multinode-985000-m02 status: &{Name:multinode-985000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:36:15.503041    5420 status.go:255] checking status of multinode-985000-m03 ...
	I0805 16:36:15.503320    5420 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:15.503350    5420 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:15.511883    5420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52959
	I0805 16:36:15.512226    5420 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:15.512540    5420 main.go:141] libmachine: Using API Version  1
	I0805 16:36:15.512553    5420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:15.512766    5420 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:15.512886    5420 main.go:141] libmachine: (multinode-985000-m03) Calling .GetState
	I0805 16:36:15.512965    5420 main.go:141] libmachine: (multinode-985000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:15.513040    5420 main.go:141] libmachine: (multinode-985000-m03) DBG | hyperkit pid from json: 5380
	I0805 16:36:15.514045    5420 status.go:330] multinode-985000-m03 host status = "Running" (err=<nil>)
	I0805 16:36:15.514054    5420 host.go:66] Checking if "multinode-985000-m03" exists ...
	I0805 16:36:15.514311    5420 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:15.514339    5420 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:15.522769    5420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52961
	I0805 16:36:15.523118    5420 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:15.523462    5420 main.go:141] libmachine: Using API Version  1
	I0805 16:36:15.523474    5420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:15.523692    5420 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:15.523803    5420 main.go:141] libmachine: (multinode-985000-m03) Calling .GetIP
	I0805 16:36:15.523886    5420 host.go:66] Checking if "multinode-985000-m03" exists ...
	I0805 16:36:15.524131    5420 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:15.524154    5420 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:15.532420    5420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52963
	I0805 16:36:15.532766    5420 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:15.533089    5420 main.go:141] libmachine: Using API Version  1
	I0805 16:36:15.533103    5420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:15.533329    5420 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:15.533449    5420 main.go:141] libmachine: (multinode-985000-m03) Calling .DriverName
	I0805 16:36:15.533580    5420 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:15.533591    5420 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHHostname
	I0805 16:36:15.533674    5420 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHPort
	I0805 16:36:15.533750    5420 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHKeyPath
	I0805 16:36:15.533835    5420 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHUsername
	I0805 16:36:15.533924    5420 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m03/id_rsa Username:docker}
	I0805 16:36:15.563629    5420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:15.573830    5420 status.go:257] multinode-985000-m03 status: &{Name:multinode-985000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
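For the worker nodes the status command never consults the apiserver (hence APIServer and Kubeconfig read "Irrelevant" in the status.go:257 lines); it SSHes into each machine and asks systemd whether the kubelet unit is active, which is why m02 keeps coming back as "kubelet: Stopped". A sketch of that per-node check, shelling out to the system ssh binary as a simplified stand-in for minikube's internal ssh_runner (the IP, username, key path, and systemctl invocation are copied from the log):

	// kubelet_check.go - illustrative sketch of the per-node kubelet probe
	// visible in the log; minikube uses its own SSH runner rather than the
	// ssh binary invoked here.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func kubeletRunning(ip, key string) bool {
		cmd := exec.Command("ssh", "-i", key, "-o", "StrictHostKeyChecking=no",
			"docker@"+ip, "sudo systemctl is-active --quiet service kubelet")
		// systemctl is-active exits 0 only when the unit is active, so a nil
		// error maps to "kubelet: Running".
		return cmd.Run() == nil
	}

	func main() {
		ip := "192.169.0.14" // multinode-985000-m02, per the log
		key := "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa"
		if kubeletRunning(ip, key) {
			fmt.Println("kubelet: Running")
		} else {
			fmt.Println("kubelet: Stopped")
		}
	}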
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status -v=7 --alsologtostderr: exit status 2 (305.887708ms)

-- stdout --
	multinode-985000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-985000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-985000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0805 16:36:17.145948    5432 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:36:17.146205    5432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:36:17.146210    5432 out.go:304] Setting ErrFile to fd 2...
	I0805 16:36:17.146214    5432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:36:17.146386    5432 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:36:17.146563    5432 out.go:298] Setting JSON to false
	I0805 16:36:17.146585    5432 mustload.go:65] Loading cluster: multinode-985000
	I0805 16:36:17.146629    5432 notify.go:220] Checking for updates...
	I0805 16:36:17.146888    5432 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:36:17.146905    5432 status.go:255] checking status of multinode-985000 ...
	I0805 16:36:17.147312    5432 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:17.147361    5432 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:17.155903    5432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52967
	I0805 16:36:17.156256    5432 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:17.156694    5432 main.go:141] libmachine: Using API Version  1
	I0805 16:36:17.156707    5432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:17.156904    5432 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:17.157059    5432 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:36:17.157157    5432 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:17.157225    5432 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:36:17.158224    5432 status.go:330] multinode-985000 host status = "Running" (err=<nil>)
	I0805 16:36:17.158249    5432 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:36:17.158503    5432 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:17.158524    5432 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:17.167276    5432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52969
	I0805 16:36:17.167616    5432 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:17.167988    5432 main.go:141] libmachine: Using API Version  1
	I0805 16:36:17.168005    5432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:17.168233    5432 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:17.168346    5432 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:36:17.168429    5432 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:36:17.168682    5432 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:17.168708    5432 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:17.177140    5432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52971
	I0805 16:36:17.177459    5432 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:17.177780    5432 main.go:141] libmachine: Using API Version  1
	I0805 16:36:17.177797    5432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:17.177993    5432 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:17.178095    5432 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:36:17.178245    5432 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:17.178268    5432 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:36:17.178362    5432 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:36:17.178456    5432 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:36:17.178542    5432 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:36:17.178627    5432 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:36:17.208777    5432 ssh_runner.go:195] Run: systemctl --version
	I0805 16:36:17.213103    5432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:17.224704    5432 kubeconfig.go:125] found "multinode-985000" server: "https://192.169.0.13:8443"
	I0805 16:36:17.224730    5432 api_server.go:166] Checking apiserver status ...
	I0805 16:36:17.224773    5432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:36:17.236488    5432 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup
	W0805 16:36:17.244365    5432 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:36:17.244424    5432 ssh_runner.go:195] Run: ls
	I0805 16:36:17.247551    5432 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:36:17.250523    5432 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0805 16:36:17.250533    5432 status.go:422] multinode-985000 apiserver status = Running (err=<nil>)
	I0805 16:36:17.250542    5432 status.go:257] multinode-985000 status: &{Name:multinode-985000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:36:17.250553    5432 status.go:255] checking status of multinode-985000-m02 ...
	I0805 16:36:17.250806    5432 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:17.250828    5432 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:17.259450    5432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52975
	I0805 16:36:17.259779    5432 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:17.260087    5432 main.go:141] libmachine: Using API Version  1
	I0805 16:36:17.260099    5432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:17.260310    5432 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:17.260424    5432 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:36:17.260512    5432 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:17.260579    5432 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:36:17.261583    5432 status.go:330] multinode-985000-m02 host status = "Running" (err=<nil>)
	I0805 16:36:17.261594    5432 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:36:17.261843    5432 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:17.261867    5432 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:17.270172    5432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52977
	I0805 16:36:17.270485    5432 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:17.270826    5432 main.go:141] libmachine: Using API Version  1
	I0805 16:36:17.270844    5432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:17.271031    5432 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:17.271169    5432 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:36:17.271260    5432 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:36:17.271507    5432 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:17.271533    5432 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:17.279735    5432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52979
	I0805 16:36:17.280063    5432 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:17.280429    5432 main.go:141] libmachine: Using API Version  1
	I0805 16:36:17.280448    5432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:17.280662    5432 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:17.280768    5432 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:36:17.280901    5432 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:17.280913    5432 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:36:17.281002    5432 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:36:17.281091    5432 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:36:17.281209    5432 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:36:17.281297    5432 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:36:17.315309    5432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:17.325771    5432 status.go:257] multinode-985000-m02 status: &{Name:multinode-985000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:36:17.325787    5432 status.go:255] checking status of multinode-985000-m03 ...
	I0805 16:36:17.326079    5432 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:17.326100    5432 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:17.334526    5432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52982
	I0805 16:36:17.334871    5432 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:17.335192    5432 main.go:141] libmachine: Using API Version  1
	I0805 16:36:17.335202    5432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:17.335395    5432 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:17.335515    5432 main.go:141] libmachine: (multinode-985000-m03) Calling .GetState
	I0805 16:36:17.335597    5432 main.go:141] libmachine: (multinode-985000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:17.335677    5432 main.go:141] libmachine: (multinode-985000-m03) DBG | hyperkit pid from json: 5380
	I0805 16:36:17.336685    5432 status.go:330] multinode-985000-m03 host status = "Running" (err=<nil>)
	I0805 16:36:17.336694    5432 host.go:66] Checking if "multinode-985000-m03" exists ...
	I0805 16:36:17.336946    5432 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:17.336969    5432 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:17.345215    5432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52984
	I0805 16:36:17.345548    5432 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:17.345878    5432 main.go:141] libmachine: Using API Version  1
	I0805 16:36:17.345890    5432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:17.346102    5432 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:17.346226    5432 main.go:141] libmachine: (multinode-985000-m03) Calling .GetIP
	I0805 16:36:17.346304    5432 host.go:66] Checking if "multinode-985000-m03" exists ...
	I0805 16:36:17.346546    5432 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:17.346570    5432 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:17.354722    5432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52986
	I0805 16:36:17.355053    5432 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:17.355387    5432 main.go:141] libmachine: Using API Version  1
	I0805 16:36:17.355399    5432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:17.355614    5432 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:17.355738    5432 main.go:141] libmachine: (multinode-985000-m03) Calling .DriverName
	I0805 16:36:17.355858    5432 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:17.355870    5432 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHHostname
	I0805 16:36:17.355950    5432 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHPort
	I0805 16:36:17.356028    5432 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHKeyPath
	I0805 16:36:17.356109    5432 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHUsername
	I0805 16:36:17.356184    5432 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m03/id_rsa Username:docker}
	I0805 16:36:17.385159    5432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:17.395450    5432 status.go:257] multinode-985000-m03 status: &{Name:multinode-985000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
E0805 16:36:19.196835    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status -v=7 --alsologtostderr: exit status 2 (305.104791ms)

-- stdout --
	multinode-985000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-985000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-985000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0805 16:36:21.581907    5443 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:36:21.582101    5443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:36:21.582107    5443 out.go:304] Setting ErrFile to fd 2...
	I0805 16:36:21.582111    5443 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:36:21.582287    5443 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:36:21.582459    5443 out.go:298] Setting JSON to false
	I0805 16:36:21.582482    5443 mustload.go:65] Loading cluster: multinode-985000
	I0805 16:36:21.582525    5443 notify.go:220] Checking for updates...
	I0805 16:36:21.582830    5443 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:36:21.582847    5443 status.go:255] checking status of multinode-985000 ...
	I0805 16:36:21.583184    5443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:21.583242    5443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:21.591827    5443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52990
	I0805 16:36:21.592199    5443 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:21.592701    5443 main.go:141] libmachine: Using API Version  1
	I0805 16:36:21.592725    5443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:21.592929    5443 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:21.593024    5443 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:36:21.593103    5443 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:21.593169    5443 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:36:21.594217    5443 status.go:330] multinode-985000 host status = "Running" (err=<nil>)
	I0805 16:36:21.594235    5443 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:36:21.594473    5443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:21.594492    5443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:21.602889    5443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52992
	I0805 16:36:21.603257    5443 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:21.603576    5443 main.go:141] libmachine: Using API Version  1
	I0805 16:36:21.603586    5443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:21.603818    5443 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:21.603927    5443 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:36:21.604016    5443 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:36:21.604254    5443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:21.604275    5443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:21.612653    5443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52994
	I0805 16:36:21.612961    5443 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:21.613291    5443 main.go:141] libmachine: Using API Version  1
	I0805 16:36:21.613305    5443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:21.613503    5443 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:21.613622    5443 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:36:21.613760    5443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:21.613781    5443 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:36:21.613876    5443 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:36:21.613958    5443 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:36:21.614039    5443 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:36:21.614131    5443 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:36:21.645187    5443 ssh_runner.go:195] Run: systemctl --version
	I0805 16:36:21.649997    5443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:21.660789    5443 kubeconfig.go:125] found "multinode-985000" server: "https://192.169.0.13:8443"
	I0805 16:36:21.660814    5443 api_server.go:166] Checking apiserver status ...
	I0805 16:36:21.660851    5443 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:36:21.671509    5443 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup
	W0805 16:36:21.679165    5443 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:36:21.679220    5443 ssh_runner.go:195] Run: ls
	I0805 16:36:21.682489    5443 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:36:21.685556    5443 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0805 16:36:21.685566    5443 status.go:422] multinode-985000 apiserver status = Running (err=<nil>)
	I0805 16:36:21.685582    5443 status.go:257] multinode-985000 status: &{Name:multinode-985000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:36:21.685593    5443 status.go:255] checking status of multinode-985000-m02 ...
	I0805 16:36:21.685838    5443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:21.685858    5443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:21.694501    5443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52998
	I0805 16:36:21.694865    5443 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:21.695208    5443 main.go:141] libmachine: Using API Version  1
	I0805 16:36:21.695224    5443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:21.695419    5443 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:21.695580    5443 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:36:21.695672    5443 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:21.695759    5443 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:36:21.696750    5443 status.go:330] multinode-985000-m02 host status = "Running" (err=<nil>)
	I0805 16:36:21.696761    5443 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:36:21.697025    5443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:21.697047    5443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:21.705482    5443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53000
	I0805 16:36:21.705833    5443 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:21.706165    5443 main.go:141] libmachine: Using API Version  1
	I0805 16:36:21.706177    5443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:21.706409    5443 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:21.706525    5443 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:36:21.706629    5443 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:36:21.706945    5443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:21.706969    5443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:21.715701    5443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53002
	I0805 16:36:21.716033    5443 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:21.716355    5443 main.go:141] libmachine: Using API Version  1
	I0805 16:36:21.716366    5443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:21.716580    5443 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:21.716691    5443 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:36:21.716828    5443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:21.716840    5443 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:36:21.716922    5443 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:36:21.717028    5443 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:36:21.717105    5443 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:36:21.717180    5443 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:36:21.750288    5443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:21.760814    5443 status.go:257] multinode-985000-m02 status: &{Name:multinode-985000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:36:21.760836    5443 status.go:255] checking status of multinode-985000-m03 ...
	I0805 16:36:21.761127    5443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:21.761150    5443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:21.769835    5443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53005
	I0805 16:36:21.770171    5443 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:21.770516    5443 main.go:141] libmachine: Using API Version  1
	I0805 16:36:21.770531    5443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:21.770719    5443 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:21.770842    5443 main.go:141] libmachine: (multinode-985000-m03) Calling .GetState
	I0805 16:36:21.770925    5443 main.go:141] libmachine: (multinode-985000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:21.771002    5443 main.go:141] libmachine: (multinode-985000-m03) DBG | hyperkit pid from json: 5380
	I0805 16:36:21.772026    5443 status.go:330] multinode-985000-m03 host status = "Running" (err=<nil>)
	I0805 16:36:21.772036    5443 host.go:66] Checking if "multinode-985000-m03" exists ...
	I0805 16:36:21.772275    5443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:21.772308    5443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:21.780826    5443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53007
	I0805 16:36:21.781159    5443 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:21.781502    5443 main.go:141] libmachine: Using API Version  1
	I0805 16:36:21.781512    5443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:21.781748    5443 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:21.781856    5443 main.go:141] libmachine: (multinode-985000-m03) Calling .GetIP
	I0805 16:36:21.781950    5443 host.go:66] Checking if "multinode-985000-m03" exists ...
	I0805 16:36:21.782240    5443 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:21.782264    5443 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:21.790807    5443 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53009
	I0805 16:36:21.791135    5443 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:21.791450    5443 main.go:141] libmachine: Using API Version  1
	I0805 16:36:21.791459    5443 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:21.791668    5443 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:21.791777    5443 main.go:141] libmachine: (multinode-985000-m03) Calling .DriverName
	I0805 16:36:21.791902    5443 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:21.791913    5443 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHHostname
	I0805 16:36:21.791984    5443 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHPort
	I0805 16:36:21.792064    5443 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHKeyPath
	I0805 16:36:21.792153    5443 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHUsername
	I0805 16:36:21.792224    5443 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m03/id_rsa Username:docker}
	I0805 16:36:21.821213    5443 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:21.832352    5443 status.go:257] multinode-985000-m03 status: &{Name:multinode-985000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
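Note: "minikube status" encodes cluster health in its exit code; the run above exits 2 while reporting kubelet Stopped on multinode-985000-m02 even though every host is Running. The assertion at multinode_test.go:290 treats any nonzero exit as not-ready and simply re-runs the command, which is why the identical status output repeats below. An approximate shell equivalent of that polling, with a bounded retry count added purely for illustration:

	# Hedged sketch: poll status until it exits 0, give up after ~2 minutes.
	for i in $(seq 1 40); do
	  out/minikube-darwin-amd64 -p multinode-985000 status >/dev/null 2>&1 && break
	  sleep 3
	done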
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status -v=7 --alsologtostderr: exit status 2 (308.449193ms)

-- stdout --
	multinode-985000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-985000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-985000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0805 16:36:24.837103    5454 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:36:24.837294    5454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:36:24.837299    5454 out.go:304] Setting ErrFile to fd 2...
	I0805 16:36:24.837303    5454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:36:24.837468    5454 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:36:24.837672    5454 out.go:298] Setting JSON to false
	I0805 16:36:24.837697    5454 mustload.go:65] Loading cluster: multinode-985000
	I0805 16:36:24.837734    5454 notify.go:220] Checking for updates...
	I0805 16:36:24.837981    5454 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:36:24.837999    5454 status.go:255] checking status of multinode-985000 ...
	I0805 16:36:24.838380    5454 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:24.838424    5454 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:24.847236    5454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53013
	I0805 16:36:24.847568    5454 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:24.847975    5454 main.go:141] libmachine: Using API Version  1
	I0805 16:36:24.847988    5454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:24.848203    5454 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:24.848318    5454 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:36:24.848397    5454 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:24.848470    5454 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:36:24.849459    5454 status.go:330] multinode-985000 host status = "Running" (err=<nil>)
	I0805 16:36:24.849476    5454 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:36:24.849704    5454 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:24.849722    5454 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:24.857995    5454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53015
	I0805 16:36:24.858330    5454 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:24.858672    5454 main.go:141] libmachine: Using API Version  1
	I0805 16:36:24.858684    5454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:24.858936    5454 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:24.859066    5454 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:36:24.859160    5454 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:36:24.859411    5454 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:24.859449    5454 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:24.868912    5454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53017
	I0805 16:36:24.869287    5454 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:24.869613    5454 main.go:141] libmachine: Using API Version  1
	I0805 16:36:24.869627    5454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:24.869825    5454 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:24.869928    5454 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:36:24.870069    5454 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:24.870092    5454 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:36:24.870164    5454 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:36:24.870242    5454 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:36:24.870321    5454 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:36:24.870401    5454 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:36:24.900985    5454 ssh_runner.go:195] Run: systemctl --version
	I0805 16:36:24.905268    5454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:24.916816    5454 kubeconfig.go:125] found "multinode-985000" server: "https://192.169.0.13:8443"
	I0805 16:36:24.916840    5454 api_server.go:166] Checking apiserver status ...
	I0805 16:36:24.916875    5454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:36:24.928146    5454 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup
	W0805 16:36:24.935902    5454 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:36:24.935953    5454 ssh_runner.go:195] Run: ls
	I0805 16:36:24.939088    5454 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:36:24.942174    5454 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0805 16:36:24.942184    5454 status.go:422] multinode-985000 apiserver status = Running (err=<nil>)
	I0805 16:36:24.942193    5454 status.go:257] multinode-985000 status: &{Name:multinode-985000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:36:24.942204    5454 status.go:255] checking status of multinode-985000-m02 ...
	I0805 16:36:24.942439    5454 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:24.942458    5454 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:24.950969    5454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53021
	I0805 16:36:24.951304    5454 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:24.951637    5454 main.go:141] libmachine: Using API Version  1
	I0805 16:36:24.951648    5454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:24.951851    5454 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:24.951957    5454 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:36:24.952051    5454 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:24.952120    5454 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:36:24.953119    5454 status.go:330] multinode-985000-m02 host status = "Running" (err=<nil>)
	I0805 16:36:24.953129    5454 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:36:24.953390    5454 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:24.953413    5454 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:24.961923    5454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53023
	I0805 16:36:24.962237    5454 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:24.962545    5454 main.go:141] libmachine: Using API Version  1
	I0805 16:36:24.962559    5454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:24.962749    5454 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:24.962862    5454 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:36:24.962953    5454 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:36:24.963208    5454 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:24.963228    5454 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:24.971698    5454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53025
	I0805 16:36:24.972020    5454 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:24.972368    5454 main.go:141] libmachine: Using API Version  1
	I0805 16:36:24.972384    5454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:24.972599    5454 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:24.972713    5454 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:36:24.972840    5454 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:24.972852    5454 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:36:24.972952    5454 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:36:24.973034    5454 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:36:24.973136    5454 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:36:24.973228    5454 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:36:25.006163    5454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:25.016631    5454 status.go:257] multinode-985000-m02 status: &{Name:multinode-985000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:36:25.016646    5454 status.go:255] checking status of multinode-985000-m03 ...
	I0805 16:36:25.016918    5454 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:25.016942    5454 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:25.025799    5454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53028
	I0805 16:36:25.026166    5454 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:25.026492    5454 main.go:141] libmachine: Using API Version  1
	I0805 16:36:25.026505    5454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:25.026707    5454 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:25.026810    5454 main.go:141] libmachine: (multinode-985000-m03) Calling .GetState
	I0805 16:36:25.026886    5454 main.go:141] libmachine: (multinode-985000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:25.026972    5454 main.go:141] libmachine: (multinode-985000-m03) DBG | hyperkit pid from json: 5380
	I0805 16:36:25.027995    5454 status.go:330] multinode-985000-m03 host status = "Running" (err=<nil>)
	I0805 16:36:25.028005    5454 host.go:66] Checking if "multinode-985000-m03" exists ...
	I0805 16:36:25.028252    5454 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:25.028284    5454 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:25.037017    5454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53030
	I0805 16:36:25.037367    5454 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:25.037698    5454 main.go:141] libmachine: Using API Version  1
	I0805 16:36:25.037713    5454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:25.037924    5454 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:25.038043    5454 main.go:141] libmachine: (multinode-985000-m03) Calling .GetIP
	I0805 16:36:25.038133    5454 host.go:66] Checking if "multinode-985000-m03" exists ...
	I0805 16:36:25.038377    5454 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:25.038419    5454 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:25.046685    5454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53032
	I0805 16:36:25.047025    5454 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:25.047381    5454 main.go:141] libmachine: Using API Version  1
	I0805 16:36:25.047400    5454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:25.047641    5454 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:25.047777    5454 main.go:141] libmachine: (multinode-985000-m03) Calling .DriverName
	I0805 16:36:25.047915    5454 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:25.047926    5454 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHHostname
	I0805 16:36:25.047998    5454 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHPort
	I0805 16:36:25.048081    5454 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHKeyPath
	I0805 16:36:25.048157    5454 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHUsername
	I0805 16:36:25.048226    5454 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m03/id_rsa Username:docker}
	I0805 16:36:25.077701    5454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:25.088911    5454 status.go:257] multinode-985000-m03 status: &{Name:multinode-985000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status -v=7 --alsologtostderr: exit status 2 (307.523216ms)

-- stdout --
	multinode-985000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-985000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-985000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0805 16:36:34.822919    5466 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:36:34.823119    5466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:36:34.823124    5466 out.go:304] Setting ErrFile to fd 2...
	I0805 16:36:34.823128    5466 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:36:34.823308    5466 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:36:34.823481    5466 out.go:298] Setting JSON to false
	I0805 16:36:34.823505    5466 mustload.go:65] Loading cluster: multinode-985000
	I0805 16:36:34.823544    5466 notify.go:220] Checking for updates...
	I0805 16:36:34.823812    5466 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:36:34.823829    5466 status.go:255] checking status of multinode-985000 ...
	I0805 16:36:34.824170    5466 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:34.824222    5466 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:34.832745    5466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53036
	I0805 16:36:34.833089    5466 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:34.833559    5466 main.go:141] libmachine: Using API Version  1
	I0805 16:36:34.833574    5466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:34.833770    5466 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:34.833886    5466 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:36:34.833982    5466 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:34.834047    5466 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:36:34.835008    5466 status.go:330] multinode-985000 host status = "Running" (err=<nil>)
	I0805 16:36:34.835031    5466 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:36:34.835281    5466 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:34.835304    5466 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:34.843470    5466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53038
	I0805 16:36:34.843794    5466 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:34.844160    5466 main.go:141] libmachine: Using API Version  1
	I0805 16:36:34.844175    5466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:34.844366    5466 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:34.844475    5466 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:36:34.844558    5466 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:36:34.844803    5466 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:34.844828    5466 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:34.853390    5466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53040
	I0805 16:36:34.853736    5466 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:34.854076    5466 main.go:141] libmachine: Using API Version  1
	I0805 16:36:34.854086    5466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:34.854288    5466 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:34.854400    5466 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:36:34.854565    5466 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:34.854586    5466 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:36:34.854661    5466 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:36:34.854745    5466 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:36:34.854827    5466 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:36:34.854907    5466 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:36:34.886022    5466 ssh_runner.go:195] Run: systemctl --version
	I0805 16:36:34.890355    5466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:34.901782    5466 kubeconfig.go:125] found "multinode-985000" server: "https://192.169.0.13:8443"
	I0805 16:36:34.901809    5466 api_server.go:166] Checking apiserver status ...
	I0805 16:36:34.901846    5466 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:36:34.912557    5466 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup
	W0805 16:36:34.919926    5466 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:36:34.919969    5466 ssh_runner.go:195] Run: ls
	I0805 16:36:34.923181    5466 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:36:34.926282    5466 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0805 16:36:34.926295    5466 status.go:422] multinode-985000 apiserver status = Running (err=<nil>)
	I0805 16:36:34.926306    5466 status.go:257] multinode-985000 status: &{Name:multinode-985000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:36:34.926315    5466 status.go:255] checking status of multinode-985000-m02 ...
	I0805 16:36:34.926579    5466 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:34.926600    5466 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:34.935166    5466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53044
	I0805 16:36:34.935526    5466 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:34.935885    5466 main.go:141] libmachine: Using API Version  1
	I0805 16:36:34.935901    5466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:34.936121    5466 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:34.936254    5466 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:36:34.936336    5466 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:34.936414    5466 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:36:34.937376    5466 status.go:330] multinode-985000-m02 host status = "Running" (err=<nil>)
	I0805 16:36:34.937386    5466 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:36:34.937625    5466 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:34.937647    5466 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:34.946004    5466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53046
	I0805 16:36:34.946351    5466 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:34.946656    5466 main.go:141] libmachine: Using API Version  1
	I0805 16:36:34.946669    5466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:34.946891    5466 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:34.947006    5466 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:36:34.947083    5466 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:36:34.947344    5466 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:34.947369    5466 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:34.955663    5466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53048
	I0805 16:36:34.955999    5466 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:34.956342    5466 main.go:141] libmachine: Using API Version  1
	I0805 16:36:34.956359    5466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:34.956562    5466 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:34.956668    5466 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:36:34.956775    5466 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:34.956786    5466 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:36:34.956866    5466 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:36:34.956938    5466 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:36:34.957023    5466 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:36:34.957099    5466 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:36:34.991924    5466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:35.002609    5466 status.go:257] multinode-985000-m02 status: &{Name:multinode-985000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:36:35.002629    5466 status.go:255] checking status of multinode-985000-m03 ...
	I0805 16:36:35.002911    5466 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:35.002934    5466 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:35.011596    5466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53051
	I0805 16:36:35.011961    5466 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:35.012301    5466 main.go:141] libmachine: Using API Version  1
	I0805 16:36:35.012312    5466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:35.012493    5466 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:35.012607    5466 main.go:141] libmachine: (multinode-985000-m03) Calling .GetState
	I0805 16:36:35.012699    5466 main.go:141] libmachine: (multinode-985000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:35.012792    5466 main.go:141] libmachine: (multinode-985000-m03) DBG | hyperkit pid from json: 5380
	I0805 16:36:35.013735    5466 status.go:330] multinode-985000-m03 host status = "Running" (err=<nil>)
	I0805 16:36:35.013745    5466 host.go:66] Checking if "multinode-985000-m03" exists ...
	I0805 16:36:35.013977    5466 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:35.014008    5466 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:35.022487    5466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53053
	I0805 16:36:35.022840    5466 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:35.023171    5466 main.go:141] libmachine: Using API Version  1
	I0805 16:36:35.023187    5466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:35.023391    5466 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:35.023496    5466 main.go:141] libmachine: (multinode-985000-m03) Calling .GetIP
	I0805 16:36:35.023567    5466 host.go:66] Checking if "multinode-985000-m03" exists ...
	I0805 16:36:35.023814    5466 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:35.023838    5466 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:35.032260    5466 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53055
	I0805 16:36:35.032623    5466 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:35.033002    5466 main.go:141] libmachine: Using API Version  1
	I0805 16:36:35.033020    5466 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:35.033270    5466 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:35.033392    5466 main.go:141] libmachine: (multinode-985000-m03) Calling .DriverName
	I0805 16:36:35.033539    5466 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:35.033558    5466 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHHostname
	I0805 16:36:35.033642    5466 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHPort
	I0805 16:36:35.033718    5466 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHKeyPath
	I0805 16:36:35.033797    5466 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHUsername
	I0805 16:36:35.033872    5466 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m03/id_rsa Username:docker}
	I0805 16:36:35.063645    5466 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:35.074085    5466 status.go:257] multinode-985000-m03 status: &{Name:multinode-985000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
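Note: for the control-plane node the status path goes further than the worker probes: it resolves the apiserver PID with pgrep, attempts to read that PID's freezer cgroup (the lookup exits 1 here, commonly because no freezer controller is mounted, so only a warning is logged), and then falls back to the HTTPS health endpoint, which returns 200. A sketch of that fallback probe, run from inside the control-plane node; the IP and port are taken from the log, -k skips certificate verification since the cluster CA is not assumed to be trusted locally, and an unauthenticated request may still be rejected if anonymous access to /healthz is disabled:

	# Hedged sketch of the control-plane apiserver probe visible above.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'       # resolve the apiserver PID
	curl -sk https://192.169.0.13:8443/healthz; echo   # expect: ok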
E0805 16:36:50.598505    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status -v=7 --alsologtostderr: exit status 2 (304.554984ms)

-- stdout --
	multinode-985000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-985000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	
	multinode-985000-m03
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0805 16:36:51.675290    5479 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:36:51.675563    5479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:36:51.675569    5479 out.go:304] Setting ErrFile to fd 2...
	I0805 16:36:51.675573    5479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:36:51.675737    5479 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:36:51.675919    5479 out.go:298] Setting JSON to false
	I0805 16:36:51.675941    5479 mustload.go:65] Loading cluster: multinode-985000
	I0805 16:36:51.675991    5479 notify.go:220] Checking for updates...
	I0805 16:36:51.676221    5479 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:36:51.676238    5479 status.go:255] checking status of multinode-985000 ...
	I0805 16:36:51.676563    5479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:51.676604    5479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:51.685282    5479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53059
	I0805 16:36:51.685602    5479 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:51.686069    5479 main.go:141] libmachine: Using API Version  1
	I0805 16:36:51.686086    5479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:51.686282    5479 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:51.686408    5479 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:36:51.686490    5479 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:51.686562    5479 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:36:51.687514    5479 status.go:330] multinode-985000 host status = "Running" (err=<nil>)
	I0805 16:36:51.687536    5479 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:36:51.687773    5479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:51.687792    5479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:51.695929    5479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53061
	I0805 16:36:51.696254    5479 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:51.696592    5479 main.go:141] libmachine: Using API Version  1
	I0805 16:36:51.696604    5479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:51.696806    5479 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:51.696913    5479 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:36:51.696993    5479 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:36:51.697249    5479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:51.697273    5479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:51.705752    5479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53063
	I0805 16:36:51.706110    5479 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:51.706425    5479 main.go:141] libmachine: Using API Version  1
	I0805 16:36:51.706434    5479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:51.706631    5479 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:51.706742    5479 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:36:51.706887    5479 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:51.706910    5479 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:36:51.706987    5479 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:36:51.707093    5479 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:36:51.707174    5479 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:36:51.707258    5479 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:36:51.737537    5479 ssh_runner.go:195] Run: systemctl --version
	I0805 16:36:51.741864    5479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:51.753025    5479 kubeconfig.go:125] found "multinode-985000" server: "https://192.169.0.13:8443"
	I0805 16:36:51.753050    5479 api_server.go:166] Checking apiserver status ...
	I0805 16:36:51.753087    5479 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:36:51.764198    5479 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup
	W0805 16:36:51.771364    5479 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1977/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:36:51.771406    5479 ssh_runner.go:195] Run: ls
	I0805 16:36:51.774537    5479 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:36:51.777561    5479 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0805 16:36:51.777571    5479 status.go:422] multinode-985000 apiserver status = Running (err=<nil>)
	I0805 16:36:51.777586    5479 status.go:257] multinode-985000 status: &{Name:multinode-985000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:36:51.777597    5479 status.go:255] checking status of multinode-985000-m02 ...
	I0805 16:36:51.777838    5479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:51.777857    5479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:51.786325    5479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53067
	I0805 16:36:51.786645    5479 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:51.786962    5479 main.go:141] libmachine: Using API Version  1
	I0805 16:36:51.786973    5479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:51.787198    5479 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:51.787313    5479 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:36:51.787393    5479 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:51.787496    5479 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:36:51.788406    5479 status.go:330] multinode-985000-m02 host status = "Running" (err=<nil>)
	I0805 16:36:51.788414    5479 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:36:51.788688    5479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:51.788713    5479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:51.797362    5479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53069
	I0805 16:36:51.797688    5479 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:51.798026    5479 main.go:141] libmachine: Using API Version  1
	I0805 16:36:51.798039    5479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:51.798244    5479 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:51.798354    5479 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:36:51.798433    5479 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:36:51.798685    5479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:51.798709    5479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:51.807121    5479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53071
	I0805 16:36:51.807439    5479 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:51.807776    5479 main.go:141] libmachine: Using API Version  1
	I0805 16:36:51.807793    5479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:51.808006    5479 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:51.808106    5479 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:36:51.808222    5479 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:51.808233    5479 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:36:51.808330    5479 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:36:51.808434    5479 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:36:51.808529    5479 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:36:51.808609    5479 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:36:51.841994    5479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:51.852240    5479 status.go:257] multinode-985000-m02 status: &{Name:multinode-985000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:36:51.852254    5479 status.go:255] checking status of multinode-985000-m03 ...
	I0805 16:36:51.852541    5479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:51.852564    5479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:51.861196    5479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53074
	I0805 16:36:51.861538    5479 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:51.861879    5479 main.go:141] libmachine: Using API Version  1
	I0805 16:36:51.861892    5479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:51.862105    5479 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:51.862218    5479 main.go:141] libmachine: (multinode-985000-m03) Calling .GetState
	I0805 16:36:51.862289    5479 main.go:141] libmachine: (multinode-985000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:36:51.862363    5479 main.go:141] libmachine: (multinode-985000-m03) DBG | hyperkit pid from json: 5380
	I0805 16:36:51.863301    5479 status.go:330] multinode-985000-m03 host status = "Running" (err=<nil>)
	I0805 16:36:51.863310    5479 host.go:66] Checking if "multinode-985000-m03" exists ...
	I0805 16:36:51.863565    5479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:51.863587    5479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:51.871934    5479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53076
	I0805 16:36:51.872259    5479 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:51.872563    5479 main.go:141] libmachine: Using API Version  1
	I0805 16:36:51.872579    5479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:51.872804    5479 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:51.872911    5479 main.go:141] libmachine: (multinode-985000-m03) Calling .GetIP
	I0805 16:36:51.872998    5479 host.go:66] Checking if "multinode-985000-m03" exists ...
	I0805 16:36:51.873253    5479 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:36:51.873283    5479 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:36:51.881606    5479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53078
	I0805 16:36:51.881923    5479 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:36:51.882240    5479 main.go:141] libmachine: Using API Version  1
	I0805 16:36:51.882249    5479 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:36:51.882468    5479 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:36:51.882597    5479 main.go:141] libmachine: (multinode-985000-m03) Calling .DriverName
	I0805 16:36:51.882739    5479 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:36:51.882751    5479 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHHostname
	I0805 16:36:51.882823    5479 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHPort
	I0805 16:36:51.882897    5479 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHKeyPath
	I0805 16:36:51.882973    5479 main.go:141] libmachine: (multinode-985000-m03) Calling .GetSSHUsername
	I0805 16:36:51.883046    5479 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m03/id_rsa Username:docker}
	I0805 16:36:51.912309    5479 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:36:51.923528    5479 status.go:257] multinode-985000-m03 status: &{Name:multinode-985000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-985000 status -v=7 --alsologtostderr" : exit status 2
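Note: the failure mode is identical across every retry above: all hosts stay Running, but kubelet on multinode-985000-m02 never leaves Stopped, so each status call exits 2 until the test gives up here. A hedged sketch for confirming and clearing that state by hand, using the profile and node names from this report:

	# Hedged sketch: inspect and restart the stopped kubelet on the m02 worker.
	out/minikube-darwin-amd64 -p multinode-985000 ssh -n multinode-985000-m02 -- \
	  sudo systemctl status kubelet --no-pager
	out/minikube-darwin-amd64 -p multinode-985000 ssh -n multinode-985000-m02 -- \
	  sudo systemctl start kubelet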
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-985000 logs -n 25: (1.938003107s)
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| kubectl | -p multinode-985000 -- rollout       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:22 PDT |                     |
	|         | status deployment/busybox            |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:32 PDT | 05 Aug 24 16:32 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g -- nslookup  |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b -- nslookup  |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g              |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g -- sh        |                  |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b              |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |         |         |                     |                     |
	| node    | add -p multinode-985000 -v 3         | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:35 PDT |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	| node    | multinode-985000 node stop m03       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:35 PDT | 05 Aug 24 16:35 PDT |
	| node    | multinode-985000 node start          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:35 PDT | 05 Aug 24 16:36 PDT |
	|         | m03 -v=7 --alsologtostderr           |                  |         |         |                     |                     |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 16:20:32
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 16:20:32.303800    4640 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:20:32.303980    4640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:20:32.303986    4640 out.go:304] Setting ErrFile to fd 2...
	I0805 16:20:32.303990    4640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:20:32.304163    4640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:20:32.305609    4640 out.go:298] Setting JSON to false
	I0805 16:20:32.329307    4640 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3003,"bootTime":1722897029,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:20:32.329400    4640 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:20:32.351877    4640 out.go:177] * [multinode-985000] minikube v1.33.1 on Darwin 14.5
	I0805 16:20:32.392940    4640 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:20:32.393020    4640 notify.go:220] Checking for updates...
	I0805 16:20:32.435775    4640 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:20:32.456783    4640 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:20:32.477872    4640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:20:32.499010    4640 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:20:32.519936    4640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:20:32.541363    4640 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:20:32.571784    4640 out.go:177] * Using the hyperkit driver based on user configuration
	I0805 16:20:32.613992    4640 start.go:297] selected driver: hyperkit
	I0805 16:20:32.614020    4640 start.go:901] validating driver "hyperkit" against <nil>
	I0805 16:20:32.614042    4640 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:20:32.618322    4640 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:20:32.618456    4640 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 16:20:32.627075    4640 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 16:20:32.631391    4640 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:20:32.631417    4640 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 16:20:32.631452    4640 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 16:20:32.631678    4640 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:20:32.631709    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:20:32.631719    4640 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0805 16:20:32.631730    4640 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0805 16:20:32.631823    4640 start.go:340] cluster config:
	{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:20:32.631925    4640 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:20:32.673756    4640 out.go:177] * Starting "multinode-985000" primary control-plane node in "multinode-985000" cluster
	I0805 16:20:32.695001    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:20:32.695088    4640 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 16:20:32.695107    4640 cache.go:56] Caching tarball of preloaded images
	I0805 16:20:32.695319    4640 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:20:32.695338    4640 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:20:32.695809    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:20:32.695848    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json: {Name:mk470c2e849a0c86ee251e86e74d9f6dfdb47dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:32.696485    4640 start.go:360] acquireMachinesLock for multinode-985000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:20:32.696593    4640 start.go:364] duration metric: took 88.666µs to acquireMachinesLock for "multinode-985000"
	I0805 16:20:32.696646    4640 start.go:93] Provisioning new machine with config: &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:20:32.696745    4640 start.go:125] createHost starting for "" (driver="hyperkit")
	I0805 16:20:32.718059    4640 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:20:32.718351    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:20:32.718416    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:20:32.728195    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52477
	I0805 16:20:32.728547    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:20:32.728938    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:20:32.728948    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:20:32.729147    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:20:32.729251    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:32.729369    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:32.729498    4640 start.go:159] libmachine.API.Create for "multinode-985000" (driver="hyperkit")
	I0805 16:20:32.729521    4640 client.go:168] LocalClient.Create starting
	I0805 16:20:32.729556    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:20:32.729608    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:20:32.729625    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:20:32.729685    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:20:32.729724    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:20:32.729737    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:20:32.729749    4640 main.go:141] libmachine: Running pre-create checks...
	I0805 16:20:32.729760    4640 main.go:141] libmachine: (multinode-985000) Calling .PreCreateCheck
	I0805 16:20:32.729840    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:32.729974    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:32.739224    4640 main.go:141] libmachine: Creating machine...
	I0805 16:20:32.739247    4640 main.go:141] libmachine: (multinode-985000) Calling .Create
	I0805 16:20:32.739475    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:32.739754    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.739457    4648 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:20:32.739852    4640 main.go:141] libmachine: (multinode-985000) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:20:32.920622    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.920524    4648 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa...
	I0805 16:20:32.957084    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.957005    4648 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk...
	I0805 16:20:32.957123    4640 main.go:141] libmachine: (multinode-985000) DBG | Writing magic tar header
	I0805 16:20:32.957134    4640 main.go:141] libmachine: (multinode-985000) DBG | Writing SSH key tar header
	I0805 16:20:32.957531    4640 main.go:141] libmachine: (multinode-985000) DBG | I0805 16:20:32.957490    4648 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000 ...
	I0805 16:20:33.331110    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:33.331140    4640 main.go:141] libmachine: (multinode-985000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid
	I0805 16:20:33.331159    4640 main.go:141] libmachine: (multinode-985000) DBG | Using UUID 3ac698fc-f622-443b-898d-9b152fa64288
	I0805 16:20:33.442582    4640 main.go:141] libmachine: (multinode-985000) DBG | Generated MAC e2:6:14:d2:13:ae
	I0805 16:20:33.442603    4640 main.go:141] libmachine: (multinode-985000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:20:33.442636    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:20:33.442669    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:20:33.442719    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3ac698fc-f622-443b-898d-9b152fa64288", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:20:33.442758    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3ac698fc-f622-443b-898d-9b152fa64288 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
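
Each "-s slot,device" pair in the argv above attaches one virtual device to a PCI slot: the hostbridge/lpc chipset, the virtio-net NIC whose MAC is polled for below, the virtio-blk raw disk, the ahci-cd boot ISO, and a virtio-rnd entropy source. A minimal Go sketch of assembling such an argv, with placeholder paths and a hypothetical buildHyperkitArgs helper (the real construction lives in minikube's hyperkit driver, not here):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// buildHyperkitArgs assembles an argv shaped like the logged CmdLine.
	// The slot layout simply mirrors the log: chipset, NIC, disk, ISO, RNG.
	func buildHyperkitArgs(stateDir, uuid string, cpus, memMB int) []string {
		return []string{
			"-A", "-u",
			"-F", stateDir + "/hyperkit.pid",
			"-c", strconv.Itoa(cpus),
			"-m", fmt.Sprintf("%dM", memMB),
			"-s", "0:0,hostbridge",
			"-s", "31,lpc",
			"-s", "1:0,virtio-net", // the NIC whose MAC is searched for below
			"-U", uuid,
			"-s", "2:0,virtio-blk," + stateDir + "/multinode-985000.rawdisk",
			"-s", "3,ahci-cd," + stateDir + "/boot2docker.iso",
			"-s", "4,virtio-rnd",
		}
	}

	func main() {
		args := buildHyperkitArgs("/tmp/state", "3ac698fc-f622-443b-898d-9b152fa64288", 2, 2200)
		fmt.Println("/usr/local/bin/hyperkit " + strings.Join(args, " "))
	}
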
	I0805 16:20:33.442774    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:20:33.445733    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 DEBUG: hyperkit: Pid is 4651
	I0805 16:20:33.446145    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 0
	I0805 16:20:33.446167    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:33.446227    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:33.447073    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:33.447135    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:33.447152    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:33.447186    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:33.447202    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:33.447214    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:33.447222    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:33.447229    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:33.447247    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:33.447269    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:33.447287    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:33.447304    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:33.447321    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:33.453446    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:20:33.506623    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:20:33.507268    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:20:33.507283    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:20:33.507290    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:20:33.507298    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:20:33.891346    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:20:33.891387    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:20:34.006163    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:20:34.006177    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:20:34.006189    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:20:34.006208    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:20:34.007050    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:20:34.007082    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:34 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:20:35.448624    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 1
	I0805 16:20:35.448640    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:35.448724    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:35.449516    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:35.449591    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:35.449607    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:35.449619    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:35.449625    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:35.449648    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:35.449664    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:35.449695    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:35.449711    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:35.449719    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:35.449725    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:35.449731    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:35.449738    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:37.449834    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 2
	I0805 16:20:37.449851    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:37.449867    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:37.450676    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:37.450690    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:37.450697    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:37.450707    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:37.450722    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:37.450733    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:37.450744    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:37.450754    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:37.450771    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:37.450784    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:37.450797    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:37.450809    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:37.450819    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:39.451161    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 3
	I0805 16:20:39.451179    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:39.451277    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:39.452025    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:39.452066    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:39.452089    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:39.452104    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:39.452124    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:39.452141    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:39.452154    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:39.452161    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:39.452167    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:39.452183    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:39.452195    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:39.452202    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:39.452211    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:39.592041    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:20:39.592070    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:20:39.592076    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:20:39.615760    4640 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:20:39 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:20:41.452210    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 4
	I0805 16:20:41.452225    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:41.452325    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:41.453101    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:41.453153    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 11 entries in /var/db/dhcpd_leases!
	I0805 16:20:41.453162    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:20:41.453169    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:20:41.453178    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:20:41.453187    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:20:41.453194    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:20:41.453200    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:20:41.453219    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:20:41.453231    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:20:41.453241    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:20:41.453250    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:20:41.453258    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:20:43.455148    4640 main.go:141] libmachine: (multinode-985000) DBG | Attempt 5
	I0805 16:20:43.455166    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:43.455244    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:43.456059    4640 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:20:43.456103    4640 main.go:141] libmachine: (multinode-985000) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:20:43.456115    4640 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:20:43.456122    4640 main.go:141] libmachine: (multinode-985000) DBG | Found match: e2:6:14:d2:13:ae
	I0805 16:20:43.456127    4640 main.go:141] libmachine: (multinode-985000) DBG | IP: 192.169.0.13
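
The repeated attempts above poll /var/db/dhcpd_leases roughly every two seconds until the generated MAC e2:6:14:d2:13:ae appears, then take that entry's IP. A self-contained sketch of the lookup, assuming the usual macOS bootpd entry layout where an ip_address= line precedes the matching hw_address=1,<mac> line (the real parser is part of minikube's hyperkit driver):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// ipForMAC scans the macOS DHCP lease file for the entry whose hw_address
	// matches mac and returns that entry's ip_address, or "" while no lease
	// exists yet (the state during attempts 0-4 in the log above).
	func ipForMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address=1,"):
				if strings.TrimPrefix(line, "hw_address=1,") == mac {
					return ip, nil // assumes ip_address precedes hw_address
				}
			}
		}
		return "", sc.Err()
	}

	func main() {
		ip, err := ipForMAC("/var/db/dhcpd_leases", "e2:6:14:d2:13:ae")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("IP:", ip)
	}
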
	I0805 16:20:43.456181    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:43.456781    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:43.456879    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:43.456972    4640 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 16:20:43.456985    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:20:43.457082    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:20:43.457144    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:20:43.457907    4640 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 16:20:43.457917    4640 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 16:20:43.457923    4640 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 16:20:43.457927    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:43.458023    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:43.458126    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:43.458255    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:43.458346    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:43.458472    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:43.458676    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:43.458683    4640 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 16:20:44.513424    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
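
"Waiting for SSH to be available" amounts to a retry loop that keeps attempting the trivial command "exit 0" until a session succeeds (the nil cmd err above is that success). The sketch below reproduces only the retry shape, using a plain TCP probe of port 22, which is weaker than the authenticated SSH check libmachine actually performs:

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	// waitForSSH retries a TCP dial to host:22 until it succeeds or the
	// deadline passes. The real check also opens an SSH session and runs
	// "exit 0"; this sketch stops at reachability.
	func waitForSSH(host string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "22"), 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("ssh on %s not reachable within %s", host, timeout)
	}

	func main() {
		if err := waitForSSH("192.169.0.13", 30*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("ssh is up")
	}
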
	I0805 16:20:44.513443    4640 main.go:141] libmachine: Detecting the provisioner...
	I0805 16:20:44.513452    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.513594    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.513694    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.513791    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.513876    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.513996    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.514158    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.514165    4640 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 16:20:44.573082    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 16:20:44.573142    4640 main.go:141] libmachine: found compatible host: buildroot
	I0805 16:20:44.573149    4640 main.go:141] libmachine: Provisioning with buildroot...
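
The provisioner is chosen from the "cat /etc/os-release" output above: the ID= field ("buildroot") is what the "found compatible host: buildroot" decision keys on. A sketch of that field extraction, using a hypothetical provisionerID helper rather than libmachine's own detection code:

	package main

	import (
		"fmt"
		"strings"
	)

	// provisionerID pulls the ID= value out of os-release text, stripping
	// optional quotes, which is all the compatibility check above needs.
	func provisionerID(osRelease string) string {
		for _, line := range strings.Split(osRelease, "\n") {
			if v, ok := strings.CutPrefix(line, "ID="); ok {
				return strings.Trim(v, `"`)
			}
		}
		return ""
	}

	func main() {
		out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
		fmt.Println(provisionerID(out)) // prints: buildroot
	}
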
	I0805 16:20:44.573155    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.573299    4640 buildroot.go:166] provisioning hostname "multinode-985000"
	I0805 16:20:44.573311    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.573416    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.573499    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.573585    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.573680    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.573795    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.573922    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.574068    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.574076    4640 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000 && echo "multinode-985000" | sudo tee /etc/hostname
	I0805 16:20:44.637872    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000
	
	I0805 16:20:44.637892    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.638029    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:44.638132    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.638218    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:44.638297    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:44.638429    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:44.638562    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:44.638582    4640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:20:44.698340    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:20:44.698360    4640 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:20:44.698377    4640 buildroot.go:174] setting up certificates
	I0805 16:20:44.698389    4640 provision.go:84] configureAuth start
	I0805 16:20:44.698397    4640 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:20:44.698544    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:44.698658    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:44.698750    4640 provision.go:143] copyHostCerts
	I0805 16:20:44.698781    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:20:44.698850    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:20:44.698858    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:20:44.699001    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:20:44.699205    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:20:44.699246    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:20:44.699250    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:20:44.699341    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:20:44.699482    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:20:44.699528    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:20:44.699533    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:20:44.699615    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:20:44.699756    4640 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-985000]
	I0805 16:20:45.028860    4640 provision.go:177] copyRemoteCerts
	I0805 16:20:45.028920    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:20:45.028938    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.029080    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.029180    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.029338    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.029452    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:45.063652    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:20:45.063724    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:20:45.083743    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:20:45.083800    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0805 16:20:45.103791    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:20:45.103863    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:20:45.123716    4640 provision.go:87] duration metric: took 425.312704ms to configureAuth
	I0805 16:20:45.123731    4640 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:20:45.123881    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:20:45.123894    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:45.124028    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.124115    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.124206    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.124285    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.124381    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.124503    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.124632    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.124639    4640 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:20:45.176256    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:20:45.176269    4640 buildroot.go:70] root file system type: tmpfs
	I0805 16:20:45.176337    4640 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:20:45.176350    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.176482    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.176580    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.176695    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.176782    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.176911    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.177045    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.177090    4640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:20:45.240992    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:20:45.241023    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:45.241166    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:45.241270    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.241382    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:45.241469    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:45.241590    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:45.241743    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:45.241755    4640 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:20:46.765402    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
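
The SSH command above uses a diff-or-replace idiom: the new unit is only swapped in (followed by daemon-reload, enable, and restart) when its rendered content differs from what is already on disk, so an unchanged unit never triggers a Docker restart. A minimal write-if-changed sketch of the same pattern (paths illustrative):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// writeIfChanged mirrors the `diff || { mv; systemctl ... }` idiom: it reports
// whether the file actually changed, so the caller knows if a restart is due.
func writeIfChanged(path string, content []byte) (bool, error) {
	old, err := os.ReadFile(path)
	if err == nil && bytes.Equal(old, content) {
		return false, nil // identical content: skip the mv and the restart
	}
	if err := os.WriteFile(path+".new", content, 0644); err != nil {
		return false, err
	}
	// Rename is the analogue of `sudo mv docker.service.new docker.service`.
	return true, os.Rename(path+".new", path)
}

func main() {
	changed, err := writeIfChanged("docker.service", []byte("[Unit]\n"))
	if err != nil {
		panic(err)
	}
	fmt.Println("daemon-reload/restart needed:", changed)
}
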
	
	I0805 16:20:46.765418    4640 main.go:141] libmachine: Checking connection to Docker...
	I0805 16:20:46.765424    4640 main.go:141] libmachine: (multinode-985000) Calling .GetURL
	I0805 16:20:46.765563    4640 main.go:141] libmachine: Docker is up and running!
	I0805 16:20:46.765570    4640 main.go:141] libmachine: Reticulating splines...
	I0805 16:20:46.765575    4640 client.go:171] duration metric: took 14.036043683s to LocalClient.Create
	I0805 16:20:46.765592    4640 start.go:167] duration metric: took 14.036090848s to libmachine.API.Create "multinode-985000"
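
With dockerd now listening on tcp://0.0.0.0:2376 under --tlsverify, the host can confirm the endpoint using the client cert, key, and CA copied earlier. A hedged Go sketch of that check (the IP and port come from the log; the cert file names are placeholders):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"os"
)

func main() {
	// CA and client pair as copied to the host earlier; names are placeholders.
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		panic("no CA certs parsed")
	}
	cert, err := tls.LoadX509KeyPair("cert.pem", "key.pem")
	if err != nil {
		panic(err)
	}
	// 192.169.0.13:2376 is the dockerd TLS endpoint from the unit file above;
	// verification succeeds because the server cert carries that IP SAN.
	conn, err := tls.Dial("tcp", "192.169.0.13:2376", &tls.Config{
		RootCAs:      pool,
		Certificates: []tls.Certificate{cert},
	})
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	fmt.Println("TLS OK, server cert subject:", conn.ConnectionState().PeerCertificates[0].Subject)
}
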
	I0805 16:20:46.765602    4640 start.go:293] postStartSetup for "multinode-985000" (driver="hyperkit")
	I0805 16:20:46.765609    4640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:20:46.765620    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.765765    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:20:46.765778    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.765878    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.765972    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.766070    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.766168    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.808597    4640 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:20:46.814840    4640 command_runner.go:130] > NAME=Buildroot
	I0805 16:20:46.814852    4640 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:20:46.814856    4640 command_runner.go:130] > ID=buildroot
	I0805 16:20:46.814869    4640 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:20:46.814873    4640 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:20:46.814969    4640 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:20:46.814985    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:20:46.815099    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:20:46.815290    4640 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:20:46.815297    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:20:46.815526    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:20:46.832473    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:20:46.852626    4640 start.go:296] duration metric: took 87.015317ms for postStartSetup
	I0805 16:20:46.852653    4640 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:20:46.853264    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:46.853417    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:20:46.853762    4640 start.go:128] duration metric: took 14.156998155s to createHost
	I0805 16:20:46.853776    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.853870    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.853964    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.854078    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.854160    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.854284    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:20:46.854405    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:20:46.854413    4640 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:20:46.906137    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722900047.071906799
	
	I0805 16:20:46.906149    4640 fix.go:216] guest clock: 1722900047.071906799
	I0805 16:20:46.906154    4640 fix.go:229] Guest: 2024-08-05 16:20:47.071906799 -0700 PDT Remote: 2024-08-05 16:20:46.85377 -0700 PDT m=+14.585721958 (delta=218.136799ms)
	I0805 16:20:46.906178    4640 fix.go:200] guest clock delta is within tolerance: 218.136799ms
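
The guest clock check above parses the output of `date +%s.%N` on the VM and compares it with the host clock; this run passes because the ~218ms delta is within tolerance. A sketch of the same computation (the 2s threshold is an assumption, not minikube's actual constant):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	out := "1722900047.071906799" // guest `date +%s.%N` output from the log
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta // compare magnitudes, guest may be ahead or behind
	}
	tolerance := 2 * time.Second // assumed threshold, not minikube's real constant
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < tolerance)
}
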
	I0805 16:20:46.906182    4640 start.go:83] releasing machines lock for "multinode-985000", held for 14.209573761s
	I0805 16:20:46.906200    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906321    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:46.906429    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906734    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906832    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:20:46.906917    4640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:20:46.906947    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.906977    4640 ssh_runner.go:195] Run: cat /version.json
	I0805 16:20:46.906987    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:20:46.907036    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.907080    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:20:46.907105    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.907167    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:20:46.907190    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.907251    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:20:46.907285    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.907353    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:20:46.936969    4640 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0805 16:20:46.937263    4640 ssh_runner.go:195] Run: systemctl --version
	I0805 16:20:46.992747    4640 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:20:46.993626    4640 command_runner.go:130] > systemd 252 (252)
	I0805 16:20:46.993660    4640 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0805 16:20:46.993799    4640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:20:46.998949    4640 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:20:46.998969    4640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:20:46.999002    4640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:20:47.012276    4640 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:20:47.012544    4640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:20:47.012556    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:20:47.012657    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:20:47.027593    4640 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:20:47.027660    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:20:47.035836    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:20:47.044911    4640 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:20:47.044968    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:20:47.053571    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:20:47.061858    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:20:47.070031    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:20:47.078524    4640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:20:47.087870    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:20:47.096303    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:20:47.104482    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
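
The series of `sed -i -r` commands above rewrites /etc/containerd/config.toml in place, e.g. forcing SystemdCgroup = false to match the cgroupfs driver chosen for this run. The same edit as a Go sketch (run it against a local copy of the file, not the live config):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "config.toml" // operate on a copy, not /etc/containerd/config.toml
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Same rewrite as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	// (?m) makes ^/$ match per line; ${1} keeps the original indentation.
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
	fmt.Println("rewrote", path)
}
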
	I0805 16:20:47.112756    4640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:20:47.120033    4640 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:20:47.120127    4640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:20:47.128644    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:47.220387    4640 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:20:47.239567    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:20:47.239642    4640 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:20:47.254939    4640 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:20:47.255001    4640 command_runner.go:130] > [Unit]
	I0805 16:20:47.255011    4640 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:20:47.255015    4640 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:20:47.255020    4640 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:20:47.255026    4640 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:20:47.255030    4640 command_runner.go:130] > StartLimitBurst=3
	I0805 16:20:47.255034    4640 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:20:47.255037    4640 command_runner.go:130] > [Service]
	I0805 16:20:47.255041    4640 command_runner.go:130] > Type=notify
	I0805 16:20:47.255055    4640 command_runner.go:130] > Restart=on-failure
	I0805 16:20:47.255063    4640 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:20:47.255073    4640 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:20:47.255080    4640 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:20:47.255088    4640 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:20:47.255094    4640 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:20:47.255099    4640 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:20:47.255112    4640 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:20:47.255120    4640 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:20:47.255128    4640 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:20:47.255134    4640 command_runner.go:130] > ExecStart=
	I0805 16:20:47.255164    4640 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:20:47.255172    4640 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:20:47.255182    4640 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:20:47.255189    4640 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:20:47.255193    4640 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:20:47.255196    4640 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:20:47.255200    4640 command_runner.go:130] > LimitCORE=infinity
	I0805 16:20:47.255205    4640 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:20:47.255209    4640 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:20:47.255212    4640 command_runner.go:130] > TasksMax=infinity
	I0805 16:20:47.255215    4640 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:20:47.255220    4640 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:20:47.255225    4640 command_runner.go:130] > Delegate=yes
	I0805 16:20:47.255230    4640 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:20:47.255233    4640 command_runner.go:130] > KillMode=process
	I0805 16:20:47.255236    4640 command_runner.go:130] > [Install]
	I0805 16:20:47.255259    4640 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:20:47.255324    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:20:47.269909    4640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:20:47.286027    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:20:47.296365    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:20:47.306405    4640 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:20:47.369760    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:20:47.379998    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:20:47.394696    4640 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:20:47.394951    4640 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:20:47.397850    4640 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:20:47.398038    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:20:47.406063    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:20:47.419537    4640 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:20:47.514227    4640 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:20:47.637079    4640 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:20:47.637156    4640 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:20:47.651314    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:47.748259    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:20:50.076345    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.32806615s)
	I0805 16:20:50.076407    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:20:50.086580    4640 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 16:20:50.099944    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:20:50.110410    4640 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:20:50.206329    4640 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:20:50.317239    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:50.417670    4640 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:20:50.431617    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:20:50.443305    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:50.555307    4640 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:20:50.610408    4640 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:20:50.610481    4640 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:20:50.614751    4640 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0805 16:20:50.614762    4640 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0805 16:20:50.614767    4640 command_runner.go:130] > Device: 0,22	Inode: 806         Links: 1
	I0805 16:20:50.614772    4640 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0805 16:20:50.614775    4640 command_runner.go:130] > Access: 2024-08-05 23:20:50.735793184 +0000
	I0805 16:20:50.614784    4640 command_runner.go:130] > Modify: 2024-08-05 23:20:50.735793184 +0000
	I0805 16:20:50.614789    4640 command_runner.go:130] > Change: 2024-08-05 23:20:50.736793062 +0000
	I0805 16:20:50.614792    4640 command_runner.go:130] >  Birth: -
	I0805 16:20:50.614829    4640 start.go:563] Will wait 60s for crictl version
	I0805 16:20:50.614890    4640 ssh_runner.go:195] Run: which crictl
	I0805 16:20:50.617807    4640 command_runner.go:130] > /usr/bin/crictl
	I0805 16:20:50.617933    4640 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:20:50.644026    4640 command_runner.go:130] > Version:  0.1.0
	I0805 16:20:50.644070    4640 command_runner.go:130] > RuntimeName:  docker
	I0805 16:20:50.644117    4640 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0805 16:20:50.644195    4640 command_runner.go:130] > RuntimeApiVersion:  v1
	I0805 16:20:50.645396    4640 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0805 16:20:50.645460    4640 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:20:50.661131    4640 command_runner.go:130] > 27.1.1
	I0805 16:20:50.662194    4640 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:20:50.677860    4640 command_runner.go:130] > 27.1.1
	I0805 16:20:50.700872    4640 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 16:20:50.700922    4640 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:20:50.701316    4640 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0805 16:20:50.706154    4640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
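
The /etc/hosts update above filters out any stale host.minikube.internal line, appends the fresh mapping, writes the result to a temp file, and copies it back into place. A Go sketch of that rewrite (using a scratch file rather than the real /etc/hosts):

package main

import (
	"os"
	"strings"
)

func main() {
	path := "hosts" // scratch stand-in for /etc/hosts
	entry := "192.169.0.1\thost.minikube.internal"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Drop any stale mapping, mirroring: grep -v $'\thost.minikube.internal$'
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry) // echo the fresh mapping at the end
	tmp := path + ".tmp"       // analogue of /tmp/h.$$
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	if err := os.Rename(tmp, path); err != nil { // analogue of the sudo cp back
		panic(err)
	}
}
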
	I0805 16:20:50.715610    4640 kubeadm.go:883] updating cluster {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 16:20:50.715677    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:20:50.715736    4640 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:20:50.733572    4640 docker.go:685] Got preloaded images: 
	I0805 16:20:50.733584    4640 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.3 wasn't preloaded
	I0805 16:20:50.733634    4640 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:20:50.741005    4640 command_runner.go:139] > {"Repositories":{}}
	I0805 16:20:50.741090    4640 ssh_runner.go:195] Run: which lz4
	I0805 16:20:50.744527    4640 command_runner.go:130] > /usr/bin/lz4
	I0805 16:20:50.744558    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0805 16:20:50.744692    4640 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0805 16:20:50.747718    4640 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 16:20:50.747836    4640 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0805 16:20:50.747851    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359612007 bytes)
	I0805 16:20:51.865752    4640 docker.go:649] duration metric: took 1.121114736s to copy over tarball
	I0805 16:20:51.865833    4640 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0805 16:20:54.241811    4640 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.375959074s)
	I0805 16:20:54.241825    4640 ssh_runner.go:146] rm: /preloaded.tar.lz4
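
The preload path scp's the images tarball to the VM, unpacks it into /var with lz4-compressed tar, then deletes it. The extraction step as a Go sketch (same flags as the logged command; the target directory and tarball name are stand-ins, so point it only at disposable paths):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same flags as the logged command; "./scratch" replaces /var for safety.
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "./scratch", "-xf", "preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("tar failed: %v: %s", err, out))
	}
	fmt.Printf("extracted in %v\n", time.Since(start)) // the log's duration metric
}
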
	I0805 16:20:54.267125    4640 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0805 16:20:54.275283    4640 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.3":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c":"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.3":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7":"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.3":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65":"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.3":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4":"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0805 16:20:54.275373    4640 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0805 16:20:54.288931    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:54.386395    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:20:56.795159    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.408741228s)
	I0805 16:20:56.795248    4640 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:20:56.808093    4640 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0805 16:20:56.808107    4640 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0805 16:20:56.808111    4640 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0805 16:20:56.808116    4640 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0805 16:20:56.808120    4640 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0805 16:20:56.808123    4640 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0805 16:20:56.808128    4640 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0805 16:20:56.808135    4640 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:20:56.809018    4640 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0805 16:20:56.809035    4640 cache_images.go:84] Images are preloaded, skipping loading
	I0805 16:20:56.809048    4640 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0805 16:20:56.809127    4640 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-985000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 16:20:56.809195    4640 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 16:20:56.847007    4640 command_runner.go:130] > cgroupfs
	I0805 16:20:56.847610    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:20:56.847620    4640 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 16:20:56.847630    4640 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 16:20:56.847650    4640 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-985000 NodeName:multinode-985000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 16:20:56.847744    4640 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-985000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 16:20:56.847807    4640 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 16:20:56.855919    4640 command_runner.go:130] > kubeadm
	I0805 16:20:56.855931    4640 command_runner.go:130] > kubectl
	I0805 16:20:56.855934    4640 command_runner.go:130] > kubelet
	I0805 16:20:56.855959    4640 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:20:56.856010    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 16:20:56.863284    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0805 16:20:56.876753    4640 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:20:56.890292    4640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0805 16:20:56.904628    4640 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0805 16:20:56.907711    4640 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:20:56.917108    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:20:57.013172    4640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:20:57.028650    4640 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000 for IP: 192.169.0.13
	I0805 16:20:57.028663    4640 certs.go:194] generating shared ca certs ...
	I0805 16:20:57.028674    4640 certs.go:226] acquiring lock for ca certs: {Name:mkb83e058d89c7d4e66f4136f377a3c305b13735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.028863    4640 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key
	I0805 16:20:57.028935    4640 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key
	I0805 16:20:57.028946    4640 certs.go:256] generating profile certs ...
	I0805 16:20:57.028995    4640 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key
	I0805 16:20:57.029007    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt with IP's: []
	I0805 16:20:57.088127    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt ...
	I0805 16:20:57.088142    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt: {Name:mkb7087fa165ae496621b10df42dfd2f8603360a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.088531    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key ...
	I0805 16:20:57.088540    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key: {Name:mk37e627de9c39a2300d317d721ebf92a202a17e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.088775    4640 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec
	I0805 16:20:57.088790    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.13]
	I0805 16:20:57.189318    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec ...
	I0805 16:20:57.189336    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec: {Name:mkb4501af4f6db766eb719de2f42fc564a23d2d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.189653    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec ...
	I0805 16:20:57.189669    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec: {Name:mke641ddecfc5629bb592a5b6321d446ed3b31bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.189903    4640 certs.go:381] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt.5b7978ec -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt
	I0805 16:20:57.190140    4640 certs.go:385] copying /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec -> /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key
	I0805 16:20:57.190318    4640 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key
	I0805 16:20:57.190336    4640 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt with IP's: []
	I0805 16:20:57.386717    4640 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt ...
	I0805 16:20:57.386733    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt: {Name:mk486344c8c5b8383e5349f68a995b553e8d31c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.387043    4640 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key ...
	I0805 16:20:57.387052    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key: {Name:mk2b24e1a5e962e12395adf21e4f6ad64901ee0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:20:57.387278    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 16:20:57.387306    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 16:20:57.387325    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 16:20:57.387349    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 16:20:57.387368    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 16:20:57.387391    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 16:20:57.387411    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 16:20:57.387432    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 16:20:57.387531    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem (1338 bytes)
	W0805 16:20:57.387583    4640 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0805 16:20:57.387591    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:20:57.387621    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem (1082 bytes)
	I0805 16:20:57.387656    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:20:57.387684    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem (1675 bytes)
	I0805 16:20:57.387747    4640 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:20:57.387781    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem -> /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.387803    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.387822    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.388188    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:20:57.408800    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 16:20:57.429927    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:20:57.449924    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:20:57.470736    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 16:20:57.490564    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 16:20:57.511342    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:20:57.531190    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 16:20:57.551984    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0805 16:20:57.571601    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0805 16:20:57.592369    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:20:57.611866    4640 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 16:20:57.626527    4640 ssh_runner.go:195] Run: openssl version
	I0805 16:20:57.630504    4640 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0805 16:20:57.630711    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0805 16:20:57.638913    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642115    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642280    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.642315    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0805 16:20:57.646345    4640 command_runner.go:130] > 51391683
	I0805 16:20:57.646544    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0805 16:20:57.654953    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0805 16:20:57.663842    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667242    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667258    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.667300    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0805 16:20:57.671438    4640 command_runner.go:130] > 3ec20f2e
	I0805 16:20:57.671648    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 16:20:57.679692    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:20:57.688061    4640 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691411    4640 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691493    4640 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.691531    4640 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:20:57.695572    4640 command_runner.go:130] > b5213941
	I0805 16:20:57.695754    4640 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
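	(The hash/`ln -fs` pairs above follow OpenSSL's hashed-directory convention: a CA in /etc/ssl/certs is found by a symlink named `<subject-hash>.0`. A minimal sketch of that step, shelling out to the same `openssl` CLI the log invokes — an illustration only, not minikube source:)
	
	// Sketch: create the <hash>.0 symlink for a CA cert, as the log does above.
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	func linkBySubjectHash(certPath, certsDir string) error {
		// `openssl x509 -hash -noout -in <cert>` prints the subject hash, e.g. "b5213941".
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // mimic `ln -fs`: replace any stale link
		return os.Symlink(certPath, link)
	}
	
	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}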
	I0805 16:20:57.704703    4640 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:20:57.707752    4640 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 16:20:57.707872    4640 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0805 16:20:57.707921    4640 kubeadm.go:392] StartCluster: {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:20:57.708054    4640 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:20:57.720408    4640 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 16:20:57.731114    4640 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0805 16:20:57.731128    4640 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0805 16:20:57.731133    4640 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0805 16:20:57.731194    4640 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 16:20:57.739645    4640 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 16:20:57.751095    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0805 16:20:57.751108    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0805 16:20:57.751113    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0805 16:20:57.751120    4640 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:20:57.751266    4640 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:20:57.751273    4640 kubeadm.go:157] found existing configuration files:
	
	I0805 16:20:57.751324    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 16:20:57.759086    4640 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:20:57.759185    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:20:57.759233    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 16:20:57.769060    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 16:20:57.778103    4640 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:20:57.778143    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:20:57.778190    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 16:20:57.786612    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 16:20:57.794733    4640 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:20:57.794754    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:20:57.794796    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 16:20:57.802671    4640 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 16:20:57.810242    4640 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:20:57.810264    4640 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:20:57.810299    4640 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
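	(The grep/`rm -f` pairs above are a stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so the upcoming `kubeadm init` can regenerate it. A sketch of that pattern — assumed shape, not minikube's actual implementation:)
	
	// Sketch: remove kubeconfigs that don't point at the expected endpoint.
	package main
	
	import (
		"bytes"
		"fmt"
		"os"
	)
	
	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !bytes.Contains(data, []byte(endpoint)) {
				// Missing file or wrong endpoint: remove it, as the log's `rm -f` does.
				os.Remove(f)
				fmt.Printf("removed (or absent): %s\n", f)
			}
		}
	}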
	I0805 16:20:57.818339    4640 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0805 16:20:57.890449    4640 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0805 16:20:57.890461    4640 command_runner.go:130] > [init] Using Kubernetes version: v1.30.3
	I0805 16:20:57.890501    4640 kubeadm.go:310] [preflight] Running pre-flight checks
	I0805 16:20:57.890507    4640 command_runner.go:130] > [preflight] Running pre-flight checks
	I0805 16:20:57.984851    4640 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 16:20:57.984855    4640 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0805 16:20:57.984956    4640 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 16:20:57.984962    4640 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0805 16:20:57.985041    4640 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 16:20:57.985038    4640 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0805 16:20:58.152965    4640 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:20:58.152995    4640 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:20:58.175785    4640 out.go:204]   - Generating certificates and keys ...
	I0805 16:20:58.175840    4640 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0805 16:20:58.175851    4640 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0805 16:20:58.175914    4640 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0805 16:20:58.175920    4640 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0805 16:20:58.229002    4640 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 16:20:58.229016    4640 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0805 16:20:58.322701    4640 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0805 16:20:58.322717    4640 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0805 16:20:58.394063    4640 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0805 16:20:58.394077    4640 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0805 16:20:58.601975    4640 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0805 16:20:58.601995    4640 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0805 16:20:58.821056    4640 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0805 16:20:58.821065    4640 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0805 16:20:58.821204    4640 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:58.821214    4640 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.150811    4640 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0805 16:20:59.150817    4640 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0805 16:20:59.151036    4640 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.151046    4640 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-985000] and IPs [192.169.0.13 127.0.0.1 ::1]
	I0805 16:20:59.206073    4640 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 16:20:59.206088    4640 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0805 16:20:59.294956    4640 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 16:20:59.294966    4640 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0805 16:20:59.348591    4640 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0805 16:20:59.348602    4640 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0805 16:20:59.348788    4640 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:20:59.348797    4640 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:20:59.511379    4640 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:20:59.511395    4640 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:20:59.789652    4640 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 16:20:59.789666    4640 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 16:20:59.965508    4640 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:20:59.965517    4640 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:21:00.208268    4640 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:21:00.208284    4640 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:21:00.402575    4640 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:21:00.402582    4640 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:21:00.409122    4640 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 16:21:00.409137    4640 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0805 16:21:00.410639    4640 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:21:00.410652    4640 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:21:00.430944    4640 out.go:204]   - Booting up control plane ...
	I0805 16:21:00.431017    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:21:00.431032    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:21:00.431106    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:21:00.431106    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:21:00.431174    4640 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:21:00.431182    4640 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:21:00.431274    4640 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:21:00.431286    4640 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:21:00.431361    4640 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:21:00.431369    4640 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:21:00.431399    4640 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0805 16:21:00.431405    4640 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0805 16:21:00.540991    4640 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 16:21:00.541004    4640 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0805 16:21:00.541076    4640 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 16:21:00.541081    4640 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0805 16:21:01.042556    4640 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.719164ms
	I0805 16:21:01.042573    4640 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.719164ms
	I0805 16:21:01.042632    4640 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 16:21:01.042639    4640 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0805 16:21:05.541995    4640 kubeadm.go:310] [api-check] The API server is healthy after 4.502407968s
	I0805 16:21:05.542014    4640 command_runner.go:130] > [api-check] The API server is healthy after 4.502407968s
	I0805 16:21:05.551474    4640 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 16:21:05.551486    4640 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0805 16:21:05.558278    4640 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 16:21:05.558284    4640 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0805 16:21:05.572116    4640 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0805 16:21:05.572130    4640 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0805 16:21:05.572281    4640 kubeadm.go:310] [mark-control-plane] Marking the node multinode-985000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 16:21:05.572292    4640 command_runner.go:130] > [mark-control-plane] Marking the node multinode-985000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0805 16:21:05.579214    4640 kubeadm.go:310] [bootstrap-token] Using token: 0mwls8.ribzsy6ooov2flu0
	I0805 16:21:05.579225    4640 command_runner.go:130] > [bootstrap-token] Using token: 0mwls8.ribzsy6ooov2flu0
	I0805 16:21:05.613851    4640 out.go:204]   - Configuring RBAC rules ...
	I0805 16:21:05.613974    4640 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 16:21:05.613988    4640 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0805 16:21:05.655317    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 16:21:05.655329    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0805 16:21:05.659733    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 16:21:05.659737    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0805 16:21:05.661608    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 16:21:05.661619    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0805 16:21:05.663605    4640 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 16:21:05.663612    4640 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0805 16:21:05.665771    4640 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 16:21:05.665778    4640 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0805 16:21:05.947572    4640 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 16:21:05.947585    4640 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0805 16:21:06.357765    4640 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0805 16:21:06.357776    4640 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0805 16:21:06.946930    4640 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0805 16:21:06.946942    4640 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0805 16:21:06.947937    4640 kubeadm.go:310] 
	I0805 16:21:06.947989    4640 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0805 16:21:06.947996    4640 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0805 16:21:06.948000    4640 kubeadm.go:310] 
	I0805 16:21:06.948071    4640 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0805 16:21:06.948080    4640 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0805 16:21:06.948088    4640 kubeadm.go:310] 
	I0805 16:21:06.948121    4640 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0805 16:21:06.948125    4640 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0805 16:21:06.948179    4640 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 16:21:06.948187    4640 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0805 16:21:06.948229    4640 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 16:21:06.948234    4640 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0805 16:21:06.948237    4640 kubeadm.go:310] 
	I0805 16:21:06.948284    4640 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0805 16:21:06.948302    4640 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0805 16:21:06.948309    4640 kubeadm.go:310] 
	I0805 16:21:06.948354    4640 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 16:21:06.948367    4640 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0805 16:21:06.948375    4640 kubeadm.go:310] 
	I0805 16:21:06.948414    4640 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0805 16:21:06.948418    4640 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0805 16:21:06.948479    4640 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 16:21:06.948488    4640 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0805 16:21:06.948558    4640 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 16:21:06.948564    4640 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0805 16:21:06.948570    4640 kubeadm.go:310] 
	I0805 16:21:06.948633    4640 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0805 16:21:06.948638    4640 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0805 16:21:06.948701    4640 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0805 16:21:06.948708    4640 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0805 16:21:06.948715    4640 kubeadm.go:310] 
	I0805 16:21:06.948788    4640 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.948795    4640 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.948879    4640 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf \
	I0805 16:21:06.948886    4640 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf \
	I0805 16:21:06.948905    4640 kubeadm.go:310] 	--control-plane 
	I0805 16:21:06.948911    4640 command_runner.go:130] > 	--control-plane 
	I0805 16:21:06.948916    4640 kubeadm.go:310] 
	I0805 16:21:06.948980    4640 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0805 16:21:06.948984    4640 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0805 16:21:06.948987    4640 kubeadm.go:310] 
	I0805 16:21:06.949052    4640 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.949057    4640 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 0mwls8.ribzsy6ooov2flu0 \
	I0805 16:21:06.949136    4640 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf 
	I0805 16:21:06.949141    4640 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:524477c6809305b6c0c2d082a15767bdfc04953bf05f4ba28f6a5db30aba8adf 
	I0805 16:21:06.949613    4640 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0805 16:21:06.949621    4640 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
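	(The `--discovery-token-ca-cert-hash` in the join commands printed above is the SHA-256 of the cluster CA certificate's Subject Public Key Info. A short sketch showing how it can be recomputed on the node from the ca.crt scp'd earlier in this log — illustrative, not part of the test:)
	
	// Sketch: recompute kubeadm's discovery-token-ca-cert-hash from the CA cert.
	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	func main() {
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}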
	I0805 16:21:06.949644    4640 cni.go:84] Creating CNI manager for ""
	I0805 16:21:06.949649    4640 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0805 16:21:06.972147    4640 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0805 16:21:07.030449    4640 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0805 16:21:07.036220    4640 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0805 16:21:07.036233    4640 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0805 16:21:07.036239    4640 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0805 16:21:07.036249    4640 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 16:21:07.036254    4640 command_runner.go:130] > Access: 2024-08-05 23:20:43.694299549 +0000
	I0805 16:21:07.036259    4640 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0805 16:21:07.036264    4640 command_runner.go:130] > Change: 2024-08-05 23:20:41.058596444 +0000
	I0805 16:21:07.036266    4640 command_runner.go:130] >  Birth: -
	I0805 16:21:07.036368    4640 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0805 16:21:07.036375    4640 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0805 16:21:07.050414    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0805 16:21:07.243070    4640 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0805 16:21:07.246445    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0805 16:21:07.250670    4640 command_runner.go:130] > serviceaccount/kindnet created
	I0805 16:21:07.255971    4640 command_runner.go:130] > daemonset.apps/kindnet created
	I0805 16:21:07.257424    4640 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 16:21:07.257500    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-985000 minikube.k8s.io/updated_at=2024_08_05T16_21_07_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4 minikube.k8s.io/name=multinode-985000 minikube.k8s.io/primary=true
	I0805 16:21:07.257502    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.266956    4640 command_runner.go:130] > -16
	I0805 16:21:07.267023    4640 ops.go:34] apiserver oom_adj: -16
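	(An oom_adj of -16 biases the kernel OOM killer strongly away from the apiserver. A tiny sketch of the same check the log performs via `cat /proc/$(pgrep kube-apiserver)/oom_adj` — illustration only; it assumes a single kube-apiserver process:)
	
	// Sketch: read the apiserver's legacy OOM bias from /proc.
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	func main() {
		// pgrep prints one PID per line; with one apiserver, TrimSpace suffices.
		pid, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
			return
		}
		data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Print(string(data)) // e.g. "-16"
	}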
	I0805 16:21:07.390396    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0805 16:21:07.392070    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.400579    4640 command_runner.go:130] > node/multinode-985000 labeled
	I0805 16:21:07.456213    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:07.893323    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:07.956622    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:08.392391    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:08.450793    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:08.892411    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:08.950456    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:09.393238    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:09.450291    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:09.892156    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:09.951159    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:10.393019    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:10.451734    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:10.893100    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:10.954360    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:11.393009    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:11.452879    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:11.894187    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:11.953480    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:12.392194    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:12.452444    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:12.894265    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:12.955367    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:13.392882    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:13.455680    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:13.892568    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:13.950195    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:14.393254    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:14.452940    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:14.892187    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:14.948447    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:15.392762    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:15.451815    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:15.892531    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:15.952781    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:16.393008    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:16.454659    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:16.892423    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:16.957989    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:17.392489    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:17.452653    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:17.892453    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:17.953809    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:18.392692    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:18.450726    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:18.893940    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:18.957266    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:19.393402    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:19.452345    4640 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0805 16:21:19.892761    4640 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0805 16:21:19.952524    4640 command_runner.go:130] > NAME      SECRETS   AGE
	I0805 16:21:19.952537    4640 command_runner.go:130] > default   0         1s
	I0805 16:21:19.952551    4640 kubeadm.go:1113] duration metric: took 12.695106906s to wait for elevateKubeSystemPrivileges
	I0805 16:21:19.952568    4640 kubeadm.go:394] duration metric: took 22.244643678s to StartCluster
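	(The burst of `kubectl get sa default` retries above is a plain poll-until-ready loop: the "default" ServiceAccount only exists once kube-controller-manager has created it, so the check is re-run on a fixed cadence until it succeeds. A sketch of that pattern — assumed shape, with timings approximated from the log:)
	
	// Sketch: poll until the "default" ServiceAccount exists or a deadline passes.
	package main
	
	import (
		"fmt"
		"os/exec"
		"time"
	)
	
	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
				"get", "sa", "default").Run()
			if err == nil {
				fmt.Println("default ServiceAccount is ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // roughly the retry cadence in the log
		}
		fmt.Println("timed out waiting for default ServiceAccount")
	}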
	I0805 16:21:19.952584    4640 settings.go:142] acquiring lock: {Name:mk564a817a54ecf2aef16a4d2309e85208c0231f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:21:19.952678    4640 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:19.953130    4640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/kubeconfig: {Name:mk2a0d8b4d330b3c26432fc65d015ddf98a9cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:21:19.953387    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0805 16:21:19.953391    4640 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:21:19.953437    4640 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 16:21:19.953474    4640 addons.go:69] Setting storage-provisioner=true in profile "multinode-985000"
	I0805 16:21:19.953501    4640 addons.go:234] Setting addon storage-provisioner=true in "multinode-985000"
	I0805 16:21:19.953507    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:19.953501    4640 addons.go:69] Setting default-storageclass=true in profile "multinode-985000"
	I0805 16:21:19.953520    4640 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:21:19.953542    4640 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-985000"
	I0805 16:21:19.953772    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.953787    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.953870    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.953897    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.962985    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52500
	I0805 16:21:19.963341    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52502
	I0805 16:21:19.963365    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.963645    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.963722    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.963735    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.963997    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.964004    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.964027    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.964249    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.964372    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.964430    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.964458    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.964465    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.964535    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.966651    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:19.966874    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:19.967275    4640 cert_rotation.go:137] Starting client certificate rotation controller
	I0805 16:21:19.967411    4640 addons.go:234] Setting addon default-storageclass=true in "multinode-985000"
	I0805 16:21:19.967434    4640 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:21:19.967665    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.967688    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.973226    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52504
	I0805 16:21:19.973568    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.973922    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.973942    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.974163    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.974282    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.974363    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.974444    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.975405    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:21:19.975491    4640 out.go:177] * Verifying Kubernetes components...
	I0805 16:21:19.976182    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52506
	I0805 16:21:19.976461    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.976795    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.976812    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.976999    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.977392    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:19.977409    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:19.986027    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52508
	I0805 16:21:19.986361    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:19.986712    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:19.986741    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:19.986959    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:19.987071    4640 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:21:19.987149    4640 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:19.987227    4640 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:21:19.988179    4640 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:21:19.988299    4640 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0805 16:21:19.988307    4640 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0805 16:21:19.988315    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:21:19.988395    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:21:19.988484    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:21:19.988568    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:21:19.988639    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:21:20.032241    4640 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:21:20.032361    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:20.069496    4640 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:21:20.069510    4640 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0805 16:21:20.069530    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:21:20.069717    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:21:20.069824    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:21:20.069935    4640 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:21:20.070041    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:21:20.084762    4640 command_runner.go:130] > apiVersion: v1
	I0805 16:21:20.084775    4640 command_runner.go:130] > data:
	I0805 16:21:20.084779    4640 command_runner.go:130] >   Corefile: |
	I0805 16:21:20.084782    4640 command_runner.go:130] >     .:53 {
	I0805 16:21:20.084785    4640 command_runner.go:130] >         errors
	I0805 16:21:20.084790    4640 command_runner.go:130] >         health {
	I0805 16:21:20.084794    4640 command_runner.go:130] >            lameduck 5s
	I0805 16:21:20.084796    4640 command_runner.go:130] >         }
	I0805 16:21:20.084812    4640 command_runner.go:130] >         ready
	I0805 16:21:20.084822    4640 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0805 16:21:20.084829    4640 command_runner.go:130] >            pods insecure
	I0805 16:21:20.084833    4640 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0805 16:21:20.084841    4640 command_runner.go:130] >            ttl 30
	I0805 16:21:20.084853    4640 command_runner.go:130] >         }
	I0805 16:21:20.084863    4640 command_runner.go:130] >         prometheus :9153
	I0805 16:21:20.084868    4640 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0805 16:21:20.084880    4640 command_runner.go:130] >            max_concurrent 1000
	I0805 16:21:20.084884    4640 command_runner.go:130] >         }
	I0805 16:21:20.084887    4640 command_runner.go:130] >         cache 30
	I0805 16:21:20.084898    4640 command_runner.go:130] >         loop
	I0805 16:21:20.084902    4640 command_runner.go:130] >         reload
	I0805 16:21:20.084905    4640 command_runner.go:130] >         loadbalance
	I0805 16:21:20.084908    4640 command_runner.go:130] >     }
	I0805 16:21:20.084911    4640 command_runner.go:130] > kind: ConfigMap
	I0805 16:21:20.084914    4640 command_runner.go:130] > metadata:
	I0805 16:21:20.084921    4640 command_runner.go:130] >   creationTimestamp: "2024-08-05T23:21:06Z"
	I0805 16:21:20.084926    4640 command_runner.go:130] >   name: coredns
	I0805 16:21:20.084929    4640 command_runner.go:130] >   namespace: kube-system
	I0805 16:21:20.084933    4640 command_runner.go:130] >   resourceVersion: "266"
	I0805 16:21:20.084937    4640 command_runner.go:130] >   uid: 5057af03-8824-4e67-a4b6-ef90c1ded7ce
	I0805 16:21:20.085056    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0805 16:21:20.184335    4640 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:21:20.203408    4640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0805 16:21:20.278639    4640 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0805 16:21:20.507141    4640 command_runner.go:130] > configmap/coredns replaced
	I0805 16:21:20.511660    4640 start.go:971] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
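	(Piecing together the coredns ConfigMap dump with the sed expression in the replace pipeline above, the rewritten Corefile should come out roughly as follows — a reconstruction from this log, not captured output; the sed adds the `log` directive before `errors` and a `hosts` block ahead of `forward`:)
	
	.:53 {
	    log
	    errors
	    health {
	       lameduck 5s
	    }
	    ready
	    kubernetes cluster.local in-addr.arpa ip6.arpa {
	       pods insecure
	       fallthrough in-addr.arpa ip6.arpa
	       ttl 30
	    }
	    prometheus :9153
	    hosts {
	       192.169.0.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf {
	       max_concurrent 1000
	    }
	    cache 30
	    loop
	    reload
	    loadbalance
	}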
	I0805 16:21:20.511929    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:20.511932    4640 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:21:20.512124    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:20.512125    4640 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xed05060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:21:20.512341    4640 node_ready.go:35] waiting up to 6m0s for node "multinode-985000" to be "Ready" ...
	I0805 16:21:20.512409    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:20.512416    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.512423    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.512424    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:20.512428    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.512430    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.512438    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.512446    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.520076    4640 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0805 16:21:20.520087    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.520092    4640 round_trippers.go:580]     Audit-Id: 304f14c4-a466-4fb6-b401-b28f4df4dfa1
	I0805 16:21:20.520095    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.520103    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.520107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.520111    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.520113    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:20.520117    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.521443    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.521456    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.521464    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.521474    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"381","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.521479    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.521487    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.521502    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.521511    4640 round_trippers.go:580]     Audit-Id: bcd9e393-6b08-4ffb-a73b-6e7c430f0212
	I0805 16:21:20.521518    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.521831    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:20.521865    4640 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"381","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.521904    4640 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:20.521914    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.521921    4640 round_trippers.go:473]     Content-Type: application/json
	I0805 16:21:20.521930    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.521935    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.530726    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.530739    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.530744    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.530748    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:20.530751    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.530754    4640 round_trippers.go:580]     Audit-Id: ba15a3b2-b69b-473e-a331-81e01385ad47
	I0805 16:21:20.530756    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.530758    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.530761    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.530773    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"383","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:20.588534    4640 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0805 16:21:20.588563    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.588570    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.588737    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.588752    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.588765    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.588764    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.588772    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.588919    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.588920    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.588931    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.589012    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses
	I0805 16:21:20.589020    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.589028    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.589034    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.597496    4640 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0805 16:21:20.597508    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.597513    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.597518    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.597521    4640 round_trippers.go:580]     Content-Length: 1273
	I0805 16:21:20.597523    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.597525    4640 round_trippers.go:580]     Audit-Id: d7394cfc-1eb3-4623-8a7f-a5088a0398c8
	I0805 16:21:20.597527    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.597530    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.597844    4640 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"391"},"items":[{"metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0805 16:21:20.598117    4640 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0805 16:21:20.598145    4640 round_trippers.go:463] PUT https://192.169.0.13:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0805 16:21:20.598150    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:20.598157    4640 round_trippers.go:473]     Content-Type: application/json
	I0805 16:21:20.598166    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:20.598171    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:20.619819    4640 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0805 16:21:20.619836    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:20.619842    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:20.619846    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:20.619849    4640 round_trippers.go:580]     Content-Length: 1220
	I0805 16:21:20.619852    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:20 GMT
	I0805 16:21:20.619855    4640 round_trippers.go:580]     Audit-Id: 299d4cc8-0cb5-4dd5-80b3-5d54592ecd90
	I0805 16:21:20.619859    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:20.619861    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:20.619898    4640 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"34b9c98b-1b12-420a-8576-fd00c496f57b","resourceVersion":"387","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0805 16:21:20.619983    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.619992    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.620141    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.620153    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.620166    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
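The storageclasses GET followed by a PUT above is the default-storageclass addon pinning the freshly applied standard class as the cluster default via the storageclass.kubernetes.io/is-default-class annotation. A rough client-go sketch, under the same clientset assumption as the earlier one:

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// markDefault re-sends the "standard" StorageClass with the
	// default-class annotation set, like the PUT in the log above.
	func markDefault(ctx context.Context, client kubernetes.Interface) error {
		sc, err := client.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if sc.Annotations == nil {
			sc.Annotations = map[string]string{}
		}
		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
		_, err = client.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
		return err
	}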
	I0805 16:21:20.750372    4640 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0805 16:21:20.753871    4640 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0805 16:21:20.759257    4640 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0805 16:21:20.767575    4640 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0805 16:21:20.774745    4640 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0805 16:21:20.786454    4640 command_runner.go:130] > pod/storage-provisioner created
	I0805 16:21:20.787838    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.787851    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.788087    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.788087    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.788098    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.788109    4640 main.go:141] libmachine: Making call to close driver server
	I0805 16:21:20.788117    4640 main.go:141] libmachine: (multinode-985000) Calling .Close
	I0805 16:21:20.788261    4640 main.go:141] libmachine: Successfully made call to close driver server
	I0805 16:21:20.788280    4640 main.go:141] libmachine: Making call to close connection to plugin binary
	I0805 16:21:20.788280    4640 main.go:141] libmachine: (multinode-985000) DBG | Closing plugin on server side
	I0805 16:21:20.811467    4640 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0805 16:21:20.871433    4640 addons.go:510] duration metric: took 917.995637ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0805 16:21:21.014507    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:21.014532    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.014545    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.014553    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.014605    4640 round_trippers.go:463] GET https://192.169.0.13:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0805 16:21:21.014619    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.014631    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.014638    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.017465    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.017464    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.017480    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.017492    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.017492    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.017496    4640 round_trippers.go:580]     Content-Length: 291
	I0805 16:21:21.017502    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.017504    4640 round_trippers.go:580]     Audit-Id: fb264fed-80ee-469b-a34e-7b1e8460f94b
	I0805 16:21:21.017506    4640 round_trippers.go:580]     Audit-Id: c9362211-8dfc-4385-87db-76c6486df53e
	I0805 16:21:21.017512    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.017513    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.017518    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.017519    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.017522    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.017524    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.017529    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.017545    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.017616    4640 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"7bdcac2f-ecae-4bb5-9dd4-4f2479d63a63","resourceVersion":"395","creationTimestamp":"2024-08-05T23:21:06Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0805 16:21:21.017684    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:21.017735    4640 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-985000" context rescaled to 1 replica
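The Scale round-trip logged above uses the autoscaling/v1 scale subresource: GET the current Scale of the coredns Deployment, set spec.replicas from 2 to 1, and PUT it back. Sketched with client-go (same assumptions as the earlier sketches):

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// rescaleCoreDNS drops the coredns Deployment to one replica through
	// the scale subresource, matching the GET/PUT pair in the log above.
	func rescaleCoreDNS(ctx context.Context, client kubernetes.Interface) error {
		scale, err := client.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		scale.Spec.Replicas = 1
		_, err = client.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}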
	I0805 16:21:21.514170    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:21.514200    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:21.514219    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:21.514226    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:21.516804    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:21.516819    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:21.516826    4640 round_trippers.go:580]     Audit-Id: 9396255c-231d-48cb-a53f-22663307b969
	I0805 16:21:21.516830    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:21.516834    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:21.516839    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:21.516849    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:21.516854    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:21 GMT
	I0805 16:21:21.516951    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.013275    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:22.013299    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:22.013311    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:22.013319    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:22.016138    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:22.016155    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:22.016163    4640 round_trippers.go:580]     Audit-Id: cc869aef-9ab4-4a7f-8835-cce2afa76dd9
	I0805 16:21:22.016168    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:22.016175    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:22.016182    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:22.016187    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:22.016193    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:22 GMT
	I0805 16:21:22.016497    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.512546    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:22.512561    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:22.512567    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:22.512572    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:22.515381    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:22.515393    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:22.515401    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:22.515407    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:22.515412    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:22.515416    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:22 GMT
	I0805 16:21:22.515420    4640 round_trippers.go:580]     Audit-Id: e7d470a0-7df5-4d85-9bb5-cbf15cfa989f
	I0805 16:21:22.515423    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:22.515634    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:22.515838    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
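From this point the log is node_ready.go's poll loop: a GET of the Node object roughly every 500ms, checking its Ready condition against the 6m0s budget declared earlier. A compact sketch of such a loop with client-go and apimachinery's wait helpers (assumptions as above):

	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the named Node until its Ready condition is
	// True, mirroring the ~500ms GET loop below, with a 6-minute budget.
	func waitNodeReady(ctx context.Context, client kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // tolerate transient API errors and keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}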
	I0805 16:21:23.012594    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:23.012606    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:23.012612    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:23.012616    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:23.014085    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:23.014095    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:23.014101    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:23.014104    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:23.014107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:23.014109    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:23.014113    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:23 GMT
	I0805 16:21:23.014116    4640 round_trippers.go:580]     Audit-Id: e12d5034-3bd9-498b-844e-12133805ded9
	I0805 16:21:23.014306    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:23.513150    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:23.513163    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:23.513168    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:23.513172    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:23.514595    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:23.514604    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:23.514610    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:23.514614    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:23.514617    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:23.514619    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:23.514622    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:23 GMT
	I0805 16:21:23.514635    4640 round_trippers.go:580]     Audit-Id: 2bc52e3b-1575-453f-87fa-51f4301a9426
	I0805 16:21:23.514871    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:24.012814    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:24.012826    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:24.012832    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:24.012835    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:24.014366    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:24.014379    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:24.014384    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:24.014388    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:24.014406    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:24.014411    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:24.014414    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:24 GMT
	I0805 16:21:24.014417    4640 round_trippers.go:580]     Audit-Id: f14d8611-e5e1-45fe-92f3-95559148c71b
	I0805 16:21:24.014572    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:24.513607    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:24.513620    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:24.513626    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:24.513629    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:24.515210    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:24.515220    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:24.515242    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:24.515253    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:24.515260    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:24.515264    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:24.515268    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:24 GMT
	I0805 16:21:24.515271    4640 round_trippers.go:580]     Audit-Id: 0a897d84-d437-4212-b36d-e414fedf55d4
	I0805 16:21:24.515427    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:25.013253    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:25.013272    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:25.013283    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:25.013321    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:25.015275    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:25.015308    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:25.015317    4640 round_trippers.go:580]     Audit-Id: ced7b45c-a072-4322-89ab-d0cc21ddfb1d
	I0805 16:21:25.015322    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:25.015325    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:25.015328    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:25.015332    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:25.015336    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:25 GMT
	I0805 16:21:25.015627    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:25.015849    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:25.512881    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:25.512902    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:25.512914    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:25.512920    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:25.515502    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:25.515517    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:25.515524    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:25.515529    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:25.515534    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:25.515538    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:25.515542    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:25 GMT
	I0805 16:21:25.515545    4640 round_trippers.go:580]     Audit-Id: dd6b59c1-dde3-4d67-b446-8823ad717d4f
	I0805 16:21:25.515665    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:26.013787    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:26.013811    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:26.013824    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:26.013830    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:26.016420    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:26.016440    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:26.016463    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:26 GMT
	I0805 16:21:26.016470    4640 round_trippers.go:580]     Audit-Id: 19939705-2879-44e6-830c-0c86394087ed
	I0805 16:21:26.016473    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:26.016485    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:26.016490    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:26.016494    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:26.016965    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:26.512523    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:26.512536    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:26.512541    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:26.512544    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:26.514158    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:26.514167    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:26.514172    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:26.514176    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:26.514179    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:26.514182    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:26 GMT
	I0805 16:21:26.514184    4640 round_trippers.go:580]     Audit-Id: f2346665-2701-41e1-94b0-41a70aa2f170
	I0805 16:21:26.514187    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:26.514489    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:27.013107    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:27.013136    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:27.013148    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:27.013155    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:27.015615    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:27.015632    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:27.015639    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:27 GMT
	I0805 16:21:27.015655    4640 round_trippers.go:580]     Audit-Id: 6abee22d-c1db-48e9-99db-e07791ed571f
	I0805 16:21:27.015661    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:27.015664    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:27.015667    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:27.015672    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:27.015747    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:27.015996    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:27.513549    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:27.513570    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:27.513582    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:27.513589    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:27.516173    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:27.516189    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:27.516197    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:27.516200    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:27.516204    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:27.516209    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:27 GMT
	I0805 16:21:27.516212    4640 round_trippers.go:580]     Audit-Id: a227585b-ae23-4bd1-b1dc-643eadd970cc
	I0805 16:21:27.516215    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:27.516416    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:28.014104    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:28.014132    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:28.014143    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:28.014159    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:28.016690    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:28.016705    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:28.016713    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:28.016717    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:28.016721    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:28.016725    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:28.016728    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:28 GMT
	I0805 16:21:28.016731    4640 round_trippers.go:580]     Audit-Id: 0d14831c-cc1f-41a9-a252-85e191b9594d
	I0805 16:21:28.016834    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:28.512703    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:28.512726    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:28.512739    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:28.512747    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:28.515176    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:28.515190    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:28.515197    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:28 GMT
	I0805 16:21:28.515201    4640 round_trippers.go:580]     Audit-Id: 6af459f8-bb08-43bf-ac7f-51ccacd5d664
	I0805 16:21:28.515206    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:28.515211    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:28.515215    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:28.515219    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:28.515378    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.013324    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:29.013354    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:29.013360    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:29.013364    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:29.014793    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:29.014804    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:29.014809    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:29.014813    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:29 GMT
	I0805 16:21:29.014817    4640 round_trippers.go:580]     Audit-Id: 2e50ff34-0c55-4136-b537-eee73f73706d
	I0805 16:21:29.014819    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:29.014822    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:29.014826    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:29.015098    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.513802    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:29.513832    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:29.513844    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:29.513852    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:29.516479    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:29.516496    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:29.516504    4640 round_trippers.go:580]     Audit-Id: bcbc3920-26b4-45f4-b91a-ce0e3dc11770
	I0805 16:21:29.516529    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:29.516538    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:29.516544    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:29.516549    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:29.516554    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:29 GMT
	I0805 16:21:29.516682    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:29.516938    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:30.013325    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:30.013349    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:30.013436    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:30.013448    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:30.016209    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:30.016222    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:30.016228    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:30.016233    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:30 GMT
	I0805 16:21:30.016238    4640 round_trippers.go:580]     Audit-Id: fb0bd3e0-89c3-4c77-a27d-be315cab22b7
	I0805 16:21:30.016242    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:30.016277    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:30.016283    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:30.016477    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:30.514344    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:30.514386    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:30.514482    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:30.514494    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:30.518828    4640 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 16:21:30.518860    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:30.518870    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:30.518876    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:30.518882    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:30 GMT
	I0805 16:21:30.518888    4640 round_trippers.go:580]     Audit-Id: c1b08932-ee78-4dcb-a190-3a8b24421284
	I0805 16:21:30.518894    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:30.518899    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:30.519002    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:31.012673    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:31.012701    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:31.012712    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:31.012718    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:31.015543    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:31.015560    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:31.015568    4640 round_trippers.go:580]     Audit-Id: b6586a64-ec07-44ee-8a00-1f3b8a00e0bd
	I0805 16:21:31.015572    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:31.015576    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:31.015580    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:31.015583    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:31.015589    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:31 GMT
	I0805 16:21:31.015682    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:31.512531    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:31.512543    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:31.512550    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:31.512554    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:31.514066    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:31.514076    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:31.514081    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:31 GMT
	I0805 16:21:31.514085    4640 round_trippers.go:580]     Audit-Id: 7d410de7-b0d5-4d4e-8455-d31b0df7d302
	I0805 16:21:31.514089    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:31.514093    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:31.514096    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:31.514107    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:31.514758    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:32.014110    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:32.014136    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:32.014147    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:32.014157    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:32.016553    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:32.016570    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:32.016580    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:32.016586    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:32.016592    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:32.016598    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:32.016602    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:32 GMT
	I0805 16:21:32.016605    4640 round_trippers.go:580]     Audit-Id: 67fdb64b-273a-46c2-aac5-c3b115422aa4
	I0805 16:21:32.016861    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:32.017132    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:32.513171    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:32.513188    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:32.513195    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:32.513198    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:32.514908    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:32.514920    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:32.514925    4640 round_trippers.go:580]     Audit-Id: 0f5a2e98-6be6-4963-8897-91c70642048c
	I0805 16:21:32.514928    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:32.514931    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:32.514933    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:32.514936    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:32.514939    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:32 GMT
	I0805 16:21:32.515082    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:33.013769    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:33.013803    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:33.013814    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:33.013822    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:33.016491    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:33.016509    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:33.016519    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:33 GMT
	I0805 16:21:33.016526    4640 round_trippers.go:580]     Audit-Id: 96b5f269-7be9-42a9-9687-cba57d05f76e
	I0805 16:21:33.016532    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:33.016538    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:33.016543    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:33.016548    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:33.016715    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:33.512751    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:33.512772    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:33.512783    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:33.512789    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:33.515431    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:33.515480    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:33.515498    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:33.515506    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:33.515510    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:33 GMT
	I0805 16:21:33.515513    4640 round_trippers.go:580]     Audit-Id: 6cd252a3-d07d-441e-bcf4-bc3bd00c2488
	I0805 16:21:33.515517    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:33.515520    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:33.515747    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.013003    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:34.013032    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:34.013043    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:34.013052    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:34.015447    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:34.015465    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:34.015472    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:34.015476    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:34.015479    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:34.015484    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:34.015487    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:34 GMT
	I0805 16:21:34.015492    4640 round_trippers.go:580]     Audit-Id: efcfb0d1-8345-4db5-bce9-e31085842da3
	I0805 16:21:34.015599    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.513298    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:34.513317    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:34.513376    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:34.513383    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:34.515051    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:34.515065    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:34.515072    4640 round_trippers.go:580]     Audit-Id: 2a42cb6a-0051-47bd-85f4-9f8ca80afa70
	I0805 16:21:34.515078    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:34.515081    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:34.515087    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:34.515099    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:34.515103    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:34 GMT
	I0805 16:21:34.515359    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:34.515540    4640 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:21:35.013932    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:35.013957    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:35.013968    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:35.013976    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:35.016505    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:35.016524    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:35.016530    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:35.016537    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:35.016541    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:35.016544    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:35.016555    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:35 GMT
	I0805 16:21:35.016559    4640 round_trippers.go:580]     Audit-Id: 09fa0e04-c026-439e-9cd7-392fd82b16fe
	I0805 16:21:35.016913    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:35.513491    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:35.513514    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:35.513526    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:35.513532    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:35.515995    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:35.516012    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:35.516020    4640 round_trippers.go:580]     Audit-Id: a2b05a8a-9a91-4d20-93d0-b8701ac59b95
	I0805 16:21:35.516024    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:35.516036    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:35.516041    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:35.516055    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:35.516060    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:35 GMT
	I0805 16:21:35.516151    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"343","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4927 chars]
	I0805 16:21:36.013521    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.013549    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.013561    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.013566    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.016095    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:36.016112    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.016119    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.016131    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.016136    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.016140    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.016144    4640 round_trippers.go:580]     Audit-Id: 77e04f39-a037-4ea2-9716-ad04139089d1
	I0805 16:21:36.016147    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.016230    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:36.016465    4640 node_ready.go:49] node "multinode-985000" has status "Ready":"True"
	I0805 16:21:36.016481    4640 node_ready.go:38] duration metric: took 15.504115701s for node "multinode-985000" to be "Ready" ...
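
[Editor's note] The fifteen-second loop above is minikube's node_ready check: it issues GET /api/v1/nodes/multinode-985000 roughly every 500 ms and reads the node's Ready condition until it flips to "True" (resourceVersion 343 -> 423 at 16:21:36). A minimal client-go sketch of an equivalent poll follows; this is illustrative only, not minikube's actual node_ready.go, and the kubeconfig path and helper name are hypothetical.

// Sketch: poll a node's Ready condition with client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady re-fetches the node on each tick (mirroring the ~500 ms
// cadence in the log) until its NodeReady condition is True or the timeout
// elapses.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, interval, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, interval, timeout, true, func(ctx context.Context) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat errors as transient and keep polling
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // condition not reported yet
	})
}

func main() {
	// Hypothetical kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForNodeReady(context.Background(), cs, "multinode-985000", 500*time.Millisecond, 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println(`node "multinode-985000" is Ready`)
}
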
	I0805 16:21:36.016489    4640 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:21:36.016543    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:36.016551    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.016559    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.016563    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.019046    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:36.019057    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.019065    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.019069    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.019078    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.019081    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.019084    4640 round_trippers.go:580]     Audit-Id: 96048303-6e62-4ba8-a291-bc1ad976756e
	I0805 16:21:36.019091    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.019721    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56289 chars]
	I0805 16:21:36.021921    4640 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:36.021960    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:36.021964    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.021970    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.021974    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.023179    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.023187    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.023192    4640 round_trippers.go:580]     Audit-Id: ba42f387-f106-4773-86de-3a22085fd86a
	I0805 16:21:36.023195    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.023198    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.023200    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.023204    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.023208    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.023410    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:36.023652    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.023659    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.023665    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.023671    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.024732    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.024744    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.024752    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.024758    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.024765    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.024768    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.024771    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.024775    4640 round_trippers.go:580]     Audit-Id: 2008721c-b230-4e73-b037-d3a843d7c7c8
	I0805 16:21:36.024909    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:36.523495    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:36.523508    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.523514    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.523519    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.525003    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.525014    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.525020    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.525042    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.525049    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.525053    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.525060    4640 round_trippers.go:580]     Audit-Id: 1ad5a8dd-64b3-4881-9a8e-e5eaab368c53
	I0805 16:21:36.525066    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.525202    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:36.525483    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:36.525490    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:36.525498    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:36.525502    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:36.526801    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:36.526810    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:36.526814    4640 round_trippers.go:580]     Audit-Id: 71c4017f-a267-489e-86ed-59098eae3b88
	I0805 16:21:36.526817    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:36.526834    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:36.526840    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:36.526846    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:36.526850    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:36 GMT
	I0805 16:21:36.527025    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"423","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4782 chars]
	I0805 16:21:37.022759    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:37.022781    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.022791    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.022799    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.025487    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.025503    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.025510    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.025515    4640 round_trippers.go:580]     Audit-Id: 7446d9fd-22ed-4d20-b0f2-e8c4a88b04f4
	I0805 16:21:37.025536    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.025543    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.025547    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.025556    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.025649    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"427","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6445 chars]
	I0805 16:21:37.026010    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.026020    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.026028    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.026033    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.027337    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.027346    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.027354    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.027359    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.027363    4640 round_trippers.go:580]     Audit-Id: a309eed4-f088-47f7-8b84-4761b59dbb8c
	I0805 16:21:37.027366    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.027368    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.027371    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.027425    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.522283    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:21:37.522304    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.522315    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.522322    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.524762    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.524776    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.524782    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.524788    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.524792    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.524795    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.524799    4640 round_trippers.go:580]     Audit-Id: eaef42a8-7b43-4091-9b70-8d31adc979e5
	I0805 16:21:37.524803    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.525073    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6576 chars]
	I0805 16:21:37.525438    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.525480    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.525488    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.525492    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.526890    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.526903    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.526912    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.526918    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.526927    4640 round_trippers.go:580]     Audit-Id: a3a0e71a-c982-4504-9fae-e76101688c05
	I0805 16:21:37.526931    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.526935    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.526937    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.527034    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.527211    4640 pod_ready.go:92] pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.527220    4640 pod_ready.go:81] duration metric: took 1.505289062s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
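
[Editor's note] The pod_ready loop above applies the same pattern per pod: GET /api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll every ~500 ms (re-fetching the node alongside it) until the pod's PodReady condition reports True, which here happened once its resourceVersion advanced from 427 to 443. A sketch of the per-pod check, reusing the imports and clientset from the previous sketch (helper name is illustrative, not minikube's pod_ready.go):

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // condition not reported yet
}
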
	I0805 16:21:37.527230    4640 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.527259    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-985000
	I0805 16:21:37.527264    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.527269    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.527277    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.528379    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.528389    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.528394    4640 round_trippers.go:580]     Audit-Id: 3cf4f372-47fb-4b72-9b30-185d93d01537
	I0805 16:21:37.528401    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.528405    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.528408    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.528411    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.528414    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.528618    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-985000","namespace":"kube-system","uid":"8d7ca2d9-8c7b-41b9-a199-de6449107471","resourceVersion":"379","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.mirror":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030299Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0805 16:21:37.528833    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.528840    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.528845    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.528850    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.529802    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.529808    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.529813    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.529816    4640 round_trippers.go:580]     Audit-Id: 314df0bd-894e-4607-bad0-3348c18fe807
	I0805 16:21:37.529820    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.529823    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.529826    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.529833    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.530046    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.530203    4640 pod_ready.go:92] pod "etcd-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.530210    4640 pod_ready.go:81] duration metric: took 2.974841ms for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.530218    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.530249    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-985000
	I0805 16:21:37.530253    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.530259    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.530262    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.531449    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:37.531456    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.531461    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.531463    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.531467    4640 round_trippers.go:580]     Audit-Id: 1801a8f0-22d5-44e8-942c-ea521b1ffa66
	I0805 16:21:37.531469    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.531475    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.531477    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.531592    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-985000","namespace":"kube-system","uid":"9be3378a-5fab-4907-baad-507918e714e4","resourceVersion":"369","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.mirror":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 7684 chars]
	I0805 16:21:37.531810    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.531820    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.531825    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.531830    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.532663    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.532668    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.532672    4640 round_trippers.go:580]     Audit-Id: 6d0fc4ed-c609-4ee7-a57f-b61eed1bc442
	I0805 16:21:37.532675    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.532679    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.532682    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.532684    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.532688    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.532807    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.532958    4640 pod_ready.go:92] pod "kube-apiserver-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.532967    4640 pod_ready.go:81] duration metric: took 2.743443ms for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.532973    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.533000    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-985000
	I0805 16:21:37.533004    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.533009    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.533012    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.533987    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.533995    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.534000    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.534004    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.534020    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.534027    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.534031    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.534034    4640 round_trippers.go:580]     Audit-Id: 97e4dc5c-f4bf-419e-8b15-be800418054c
	I0805 16:21:37.534147    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-985000","namespace":"kube-system","uid":"4ad64361-65de-4b0b-b2a3-07df18c2e603","resourceVersion":"342","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.mirror":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.seen":"2024-08-05T23:21:06.366027130Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7259 chars]
	I0805 16:21:37.534370    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.534377    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.534383    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.534386    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.535293    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.535301    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.535305    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.535308    4640 round_trippers.go:580]     Audit-Id: a4c04a0a-9401-41d1-a0fc-f2a2187abde4
	I0805 16:21:37.535310    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.535313    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.535320    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.535323    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.535432    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.535591    4640 pod_ready.go:92] pod "kube-controller-manager-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.535599    4640 pod_ready.go:81] duration metric: took 2.621545ms for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.535606    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.535629    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwgw7
	I0805 16:21:37.535634    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.535639    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.535643    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.536550    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:37.536557    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.536565    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.536570    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.536575    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.536578    4640 round_trippers.go:580]     Audit-Id: 5a688e80-7db3-4070-a1a8-c3419ddb4d44
	I0805 16:21:37.536580    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.536582    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.536704    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fwgw7","generateName":"kube-proxy-","namespace":"kube-system","uid":"3fb72e39-699d-4123-ae5e-e314a191d904","resourceVersion":"409","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b6258e6-7b31-4600-b32b-4a269867c123","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b6258e6-7b31-4600-b32b-4a269867c123\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0805 16:21:37.614745    4640 request.go:629] Waited for 77.807971ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.614815    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:37.614822    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.614839    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.614845    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.616956    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.616984    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.616989    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.616993    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.616996    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.616999    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.617002    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:37 GMT
	I0805 16:21:37.617005    4640 round_trippers.go:580]     Audit-Id: e297627c-4c52-417b-935c-d406bf086c16
	I0805 16:21:37.617232    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:37.617428    4640 pod_ready.go:92] pod "kube-proxy-fwgw7" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:37.617437    4640 pod_ready.go:81] duration metric: took 81.82693ms for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
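
[Editor's note] The "Waited for ... due to client-side throttling" lines above come from client-go's default client-side token-bucket limiter (QPS=5, Burst=10), not from the server's priority-and-fairness machinery — the log message itself says as much. A hedged Go sketch of how a caller would raise those limits on a rest.Config (names here are illustrative, not minikube's code):

    // Illustrative only: raising client-go's default rate limits
    // (QPS=5, Burst=10) removes the client-side throttling pauses.
    package fastclient

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // default is 5 requests/second
        cfg.Burst = 100 // default burst is 10
        return kubernetes.NewForConfig(cfg)
    }
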
	I0805 16:21:37.617444    4640 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:37.815296    4640 request.go:629] Waited for 197.761592ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:21:37.815347    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:21:37.815355    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:37.815366    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:37.815376    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:37.817961    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:37.817976    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:37.818001    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:37.818008    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:37.818049    4640 round_trippers.go:580]     Audit-Id: cc44c4e8-8012-4718-aa24-c05fec399a2e
	I0805 16:21:37.818064    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:37.818078    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:37.818082    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:37.818186    4640 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-985000","namespace":"kube-system","uid":"5e23b1b7-e45d-4b43-831c-aa835c5e536d","resourceVersion":"396","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.mirror":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.seen":"2024-08-05T23:21:06.366029633Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0805 16:21:38.014472    4640 request.go:629] Waited for 195.947535ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:38.014569    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:21:38.014578    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.014589    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.014597    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.017395    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.017406    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.017413    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.017418    4640 round_trippers.go:580]     Audit-Id: 925efcbc-f43b-4431-905e-26927bb76a48
	I0805 16:21:38.017422    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.017428    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.017434    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.017441    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.017905    4640 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","fi [truncated 4958 chars]
	I0805 16:21:38.018153    4640 pod_ready.go:92] pod "kube-scheduler-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:21:38.018164    4640 pod_ready.go:81] duration metric: took 400.713995ms for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:21:38.018173    4640 pod_ready.go:38] duration metric: took 2.001673669s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
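
[Editor's note] The block above is the pod_ready wait: minikube repeatedly GETs each control-plane pod (and its node) until the pod's Ready condition is True, with a 6m0s ceiling per pod. A minimal client-go sketch of that polling pattern follows; it is illustrative only (waitForPodReady is a made-up name, not minikube's pod_ready.go):

    // Illustrative only: poll a pod until its Ready condition is True,
    // mirroring the 6m0s waits logged above.
    package podwait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient errors: keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
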
	I0805 16:21:38.018198    4640 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:21:38.018268    4640 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:21:38.030133    4640 command_runner.go:130] > 1977
	I0805 16:21:38.030360    4640 api_server.go:72] duration metric: took 18.07694495s to wait for apiserver process to appear ...
	I0805 16:21:38.030369    4640 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:21:38.030384    4640 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:21:38.034009    4640 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
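
[Editor's note] The healthz probe above is a plain HTTPS GET that expects a 200 response with body "ok". A hedged standalone sketch (the InsecureSkipVerify shortcut is for illustration only; minikube's real check authenticates against the cluster CA from the kubeconfig):

    // Illustrative only: probe the apiserver /healthz endpoint.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
        }}
        resp, err := client.Get("https://192.169.0.13:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }
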
	I0805 16:21:38.034048    4640 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0805 16:21:38.034052    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.034058    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.034063    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.034646    4640 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:21:38.034653    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.034658    4640 round_trippers.go:580]     Audit-Id: 9f5c9766-330c-4bb5-a5de-4c3a0fdbe474
	I0805 16:21:38.034662    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.034665    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.034668    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.034670    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.034673    4640 round_trippers.go:580]     Content-Length: 263
	I0805 16:21:38.034676    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.034687    4640 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0805 16:21:38.034733    4640 api_server.go:141] control plane version: v1.30.3
	I0805 16:21:38.034742    4640 api_server.go:131] duration metric: took 4.369143ms to wait for apiserver health ...
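
[Editor's note] The /version response above is the standard apiserver version document. A hedged sketch of the same call via client-go's discovery client, which decodes that JSON body into a version.Info struct:

    // Illustrative only: read the control-plane version, as the
    // "control plane version: v1.30.3" line above does.
    package apiversion

    import "k8s.io/client-go/kubernetes"

    func controlPlaneVersion(cs kubernetes.Interface) (string, error) {
        info, err := cs.Discovery().ServerVersion()
        if err != nil {
            return "", err
        }
        return info.GitVersion, nil // "v1.30.3" in this run
    }
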
	I0805 16:21:38.034747    4640 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 16:21:38.213812    4640 request.go:629] Waited for 178.999213ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.213950    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.213960    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.213970    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.213980    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.217309    4640 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:21:38.217324    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.217331    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.217336    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.217363    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.217372    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.217377    4640 round_trippers.go:580]     Audit-Id: 0f21513f-44e7-4d2f-bacd-2a12fceef757
	I0805 16:21:38.217381    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.217979    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0805 16:21:38.219249    4640 system_pods.go:59] 8 kube-system pods found
	I0805 16:21:38.219261    4640 system_pods.go:61] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:21:38.219265    4640 system_pods.go:61] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:21:38.219268    4640 system_pods.go:61] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:21:38.219271    4640 system_pods.go:61] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:21:38.219276    4640 system_pods.go:61] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:21:38.219278    4640 system_pods.go:61] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:21:38.219280    4640 system_pods.go:61] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:21:38.219283    4640 system_pods.go:61] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:21:38.219286    4640 system_pods.go:74] duration metric: took 184.535842ms to wait for pod list to return data ...
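
[Editor's note] The system_pods wait above is a single list of the kube-system namespace followed by a per-pod phase check. A hedged client-go sketch of that shape (function name is invented for illustration):

    // Illustrative only: list kube-system pods and count those Running.
    package syspods

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func runningSystemPods(ctx context.Context, cs kubernetes.Interface) (int, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
        if err != nil {
            return 0, err
        }
        n := 0
        for _, p := range pods.Items {
            if p.Status.Phase == corev1.PodRunning {
                n++
            }
        }
        return n, nil // 8 in the run above
    }
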
	I0805 16:21:38.219291    4640 default_sa.go:34] waiting for default service account to be created ...
	I0805 16:21:38.413643    4640 request.go:629] Waited for 194.308242ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:21:38.413680    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:21:38.413687    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.413695    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.413699    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.415522    4640 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:21:38.415531    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.415536    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.415539    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.415543    4640 round_trippers.go:580]     Content-Length: 261
	I0805 16:21:38.415546    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.415548    4640 round_trippers.go:580]     Audit-Id: efc85c0c-9bbc-4cb7-8c14-19ba2f873800
	I0805 16:21:38.415551    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.415553    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.415563    4640 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b0626468-f73b-4e9b-8270-658495d43f4a","resourceVersion":"337","creationTimestamp":"2024-08-05T23:21:19Z"}}]}
	I0805 16:21:38.415681    4640 default_sa.go:45] found service account: "default"
	I0805 16:21:38.415690    4640 default_sa.go:55] duration metric: took 196.394719ms for default service account to be created ...
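
[Editor's note] The default_sa wait reduces to checking that a ServiceAccount named "default" exists in the "default" namespace, since kube-controller-manager creates it shortly after the namespace appears. A hedged sketch:

    // Illustrative only: check for the "default" ServiceAccount.
    package defaultsa

    import (
        "context"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func defaultServiceAccountExists(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return false, nil
        }
        if err != nil {
            return false, err
        }
        return true, nil
    }
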
	I0805 16:21:38.415697    4640 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 16:21:38.613742    4640 request.go:629] Waited for 198.012461ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.613858    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:21:38.613864    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.613870    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.613874    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.616077    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.616090    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.616099    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:38 GMT
	I0805 16:21:38.616106    4640 round_trippers.go:580]     Audit-Id: 3f8a6f23-788b-41c4-8dee-6ff59c02c21d
	I0805 16:21:38.616112    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.616116    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.616126    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.616143    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.616489    4640 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"448"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"443","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56403 chars]
	I0805 16:21:38.617747    4640 system_pods.go:86] 8 kube-system pods found
	I0805 16:21:38.617761    4640 system_pods.go:89] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:21:38.617766    4640 system_pods.go:89] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:21:38.617770    4640 system_pods.go:89] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:21:38.617773    4640 system_pods.go:89] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:21:38.617776    4640 system_pods.go:89] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:21:38.617780    4640 system_pods.go:89] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:21:38.617784    4640 system_pods.go:89] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:21:38.617787    4640 system_pods.go:89] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:21:38.617792    4640 system_pods.go:126] duration metric: took 202.090644ms to wait for k8s-apps to be running ...
	I0805 16:21:38.617801    4640 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 16:21:38.617848    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:21:38.629448    4640 system_svc.go:56] duration metric: took 11.643357ms WaitForService to wait for kubelet
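
[Editor's note] The kubelet service check above relies on `systemctl is-active --quiet`, which exits 0 if and only if the unit is active, so the command's exit status alone answers the question. A hedged local sketch (minikube runs this over SSH inside the VM; the exact unit argument here is an assumption):

    // Illustrative only: is-active --quiet exits 0 iff the unit is active.
    package svccheck

    import "os/exec"

    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }
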
	I0805 16:21:38.629463    4640 kubeadm.go:582] duration metric: took 18.676048708s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:21:38.629475    4640 node_conditions.go:102] verifying NodePressure condition ...
	I0805 16:21:38.814057    4640 request.go:629] Waited for 184.539621ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0805 16:21:38.814182    4640 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0805 16:21:38.814193    4640 round_trippers.go:469] Request Headers:
	I0805 16:21:38.814205    4640 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:21:38.814213    4640 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:21:38.817076    4640 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:21:38.817092    4640 round_trippers.go:577] Response Headers:
	I0805 16:21:38.817099    4640 round_trippers.go:580]     Audit-Id: 83bb2c88-8ae3-45b7-a0f6-9d3f9fead5f2
	I0805 16:21:38.817103    4640 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:21:38.817112    4640 round_trippers.go:580]     Content-Type: application/json
	I0805 16:21:38.817116    4640 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:21:38.817123    4640 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:21:38.817128    4640 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:21:39 GMT
	I0805 16:21:38.817200    4640 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"449"},"items":[{"metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"438","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5011 chars]
	I0805 16:21:38.817474    4640 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:21:38.817490    4640 node_conditions.go:123] node cpu capacity is 2
	I0805 16:21:38.817502    4640 node_conditions.go:105] duration metric: took 188.023135ms to run NodePressure ...
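
[Editor's note] The NodePressure verification above lists the nodes and reads their reported capacity, which is where the "ephemeral capacity is 17734596Ki" and "cpu capacity is 2" figures come from. A hedged client-go sketch:

    // Illustrative only: print each node's CPU and ephemeral-storage capacity.
    package nodecap

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s cpu=%s ephemeral-storage=%s\n",
                n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
        }
        return nil
    }
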
	I0805 16:21:38.817512    4640 start.go:241] waiting for startup goroutines ...
	I0805 16:21:38.817520    4640 start.go:246] waiting for cluster config update ...
	I0805 16:21:38.817530    4640 start.go:255] writing updated cluster config ...
	I0805 16:21:38.838343    4640 out.go:177] 
	I0805 16:21:38.859405    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:38.859465    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:38.881260    4640 out.go:177] * Starting "multinode-985000-m02" worker node in "multinode-985000" cluster
	I0805 16:21:38.923226    4640 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:21:38.923254    4640 cache.go:56] Caching tarball of preloaded images
	I0805 16:21:38.923425    4640 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:21:38.923439    4640 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:21:38.923503    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:38.924257    4640 start.go:360] acquireMachinesLock for multinode-985000-m02: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:21:38.924355    4640 start.go:364] duration metric: took 78.775µs to acquireMachinesLock for "multinode-985000-m02"
	I0805 16:21:38.924379    4640 start.go:93] Provisioning new machine with config: &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0805 16:21:38.924443    4640 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0805 16:21:38.946258    4640 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0805 16:21:38.946431    4640 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:21:38.946482    4640 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:21:38.956315    4640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52515
	I0805 16:21:38.956651    4640 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:21:38.957008    4640 main.go:141] libmachine: Using API Version  1
	I0805 16:21:38.957028    4640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:21:38.957245    4640 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:21:38.957408    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:38.957527    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:38.957642    4640 start.go:159] libmachine.API.Create for "multinode-985000" (driver="hyperkit")
	I0805 16:21:38.957663    4640 client.go:168] LocalClient.Create starting
	I0805 16:21:38.957697    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem
	I0805 16:21:38.957735    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:21:38.957747    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:21:38.957790    4640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem
	I0805 16:21:38.957819    4640 main.go:141] libmachine: Decoding PEM data...
	I0805 16:21:38.957833    4640 main.go:141] libmachine: Parsing certificate...
	I0805 16:21:38.957849    4640 main.go:141] libmachine: Running pre-create checks...
	I0805 16:21:38.957855    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .PreCreateCheck
	I0805 16:21:38.957933    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:38.957959    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:38.967700    4640 main.go:141] libmachine: Creating machine...
	I0805 16:21:38.967725    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .Create
	I0805 16:21:38.967957    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:38.968233    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:38.967940    4677 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:21:38.968338    4640 main.go:141] libmachine: (multinode-985000-m02) Downloading /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso...
	I0805 16:21:39.171726    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.171650    4677 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa...
	I0805 16:21:39.251408    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.251327    4677 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk...
	I0805 16:21:39.251421    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Writing magic tar header
	I0805 16:21:39.251439    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Writing SSH key tar header
	I0805 16:21:39.252021    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | I0805 16:21:39.251983    4677 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02 ...
	I0805 16:21:39.622286    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:39.622309    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid
	I0805 16:21:39.622382    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Using UUID ab5b9c9f-9e28-4bc2-8fcd-b98fce011173
	I0805 16:21:39.647304    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Generated MAC a6:1c:88:9c:44:3
	I0805 16:21:39.647324    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:21:39.647363    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0805 16:21:39.647396    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d2240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0805 16:21:39.647440    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/j
enkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:21:39.647475    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ab5b9c9f-9e28-4bc2-8fcd-b98fce011173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/mult
inode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
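
[Editor's note] The DEBUG lines above show the full hyperkit argv the driver assembles: PCI slots for hostbridge, lpc, virtio-net, virtio-blk (the raw disk), ahci-cd (the boot2docker ISO) and virtio-rnd, a com1 autopty console, and a kexec boot of the bzimage/initrd pair. A trimmed, hypothetical os/exec sketch of that launch; the path is a placeholder and the disk, ISO and kexec arguments from the log are omitted for brevity:

    // Illustrative only: start hyperkit with a subset of the flags logged above.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        stateDir := "/tmp/hyperkit-demo" // placeholder, not the Jenkins path above
        cmd := exec.Command("/usr/local/bin/hyperkit",
            "-A", "-u",
            "-F", stateDir+"/hyperkit.pid",
            "-c", "2",
            "-m", "2200M",
            "-s", "0:0,hostbridge",
            "-s", "31,lpc",
            "-s", "1:0,virtio-net",
            "-U", "ab5b9c9f-9e28-4bc2-8fcd-b98fce011173",
        )
        if err := cmd.Start(); err != nil {
            log.Fatal(err)
        }
        log.Printf("hyperkit pid %d", cmd.Process.Pid) // 4678 in the run above
    }
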
	I0805 16:21:39.647493    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:21:39.650407    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 DEBUG: hyperkit: Pid is 4678
	I0805 16:21:39.650823    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 0
	I0805 16:21:39.650838    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:39.650909    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:39.651807    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:39.651870    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:39.651899    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:39.651984    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:39.652006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:39.652022    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:39.652032    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:39.652039    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:39.652046    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:39.652082    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:39.652100    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:39.652113    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:39.652123    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:39.652143    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:39.657903    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:21:39.666018    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:21:39.666937    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:21:39.666963    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:21:39.666975    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:21:39.666990    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:39 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:21:40.050205    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:21:40.050221    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:21:40.165006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:21:40.165028    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:21:40.165042    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:21:40.165049    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:21:40.165899    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:21:40.165911    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:40 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:21:41.653048    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 1
	I0805 16:21:41.653066    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:41.653144    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:41.653911    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:41.653968    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:41.653979    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:41.653992    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:41.653998    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:41.654006    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:41.654015    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:41.654030    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:41.654045    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:41.654053    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:41.654061    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:41.654070    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:41.654078    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:41.654093    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:43.655366    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 2
	I0805 16:21:43.655382    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:43.655471    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:43.656243    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:43.656291    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:43.656301    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:43.656319    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:43.656329    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:43.656351    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:43.656362    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:43.656369    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:43.656375    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:43.656391    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:43.656406    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:43.656416    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:43.656423    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:43.656437    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:45.657345    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 3
	I0805 16:21:45.657361    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:45.657459    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:45.658214    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:45.658269    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:45.658278    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:45.658286    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:45.658295    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:45.658310    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:45.658321    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:45.658329    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:45.658337    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:45.658349    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:45.658362    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:45.658370    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:45.658378    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:45.658387    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:45.751756    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:21:45.751812    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:21:45.751830    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:21:45.774801    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:21:45 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:21:47.659182    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 4
	I0805 16:21:47.659208    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:47.659291    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:47.660062    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:47.660112    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 12 entries in /var/db/dhcpd_leases!
	I0805 16:21:47.660128    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:21:47.660137    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:be:c6:c8:e8:40:a6 ID:1,be:c6:c8:e8:40:a6 Lease:0x66b15dab}
	I0805 16:21:47.660145    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:12:2f:22:ec:d8:2a ID:1,12:2f:22:ec:d8:2a Lease:0x66b2aee6}
	I0805 16:21:47.660153    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:fa:cb:3b:41:e8:4a ID:1,fa:cb:3b:41:e8:4a Lease:0x66b2ae80}
	I0805 16:21:47.660162    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:22:74:d2:e0:b1:80 ID:1,22:74:d2:e0:b1:80 Lease:0x66b2ae53}
	I0805 16:21:47.660178    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:2e:80:64:4a:6a:1a ID:1,2e:80:64:4a:6a:1a Lease:0x66b2ad70}
	I0805 16:21:47.660192    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:5e:e5:6c:f1:60:ca ID:1,5e:e5:6c:f1:60:ca Lease:0x66b15c55}
	I0805 16:21:47.660204    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:b2:64:5d:40:b:b5 ID:1,b2:64:5d:40:b:b5 Lease:0x66b2ad10}
	I0805 16:21:47.660218    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:3e:79:a8:cb:37:4b ID:1,3e:79:a8:cb:37:4b Lease:0x66b2adfd}
	I0805 16:21:47.660230    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:ea:7a:62:c8:77:71 ID:1,ea:7a:62:c8:77:71 Lease:0x66b2aa87}
	I0805 16:21:47.660240    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:2e:c7:12:97:7a:17 ID:1,2e:c7:12:97:7a:17 Lease:0x66b15865}
	I0805 16:21:47.660260    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:4e:2c:40:42:c9:36 ID:1,4e:2c:40:42:c9:36 Lease:0x66b2a828}
	I0805 16:21:49.662115    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 5
	I0805 16:21:49.662148    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:49.662310    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:49.663748    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:21:49.663812    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 13 entries in /var/db/dhcpd_leases!
	I0805 16:21:49.663831    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b00c}
	I0805 16:21:49.663846    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | Found match: a6:1c:88:9c:44:3
	I0805 16:21:49.663856    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | IP: 192.169.0.14
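The "Attempt N" passes above are the driver polling macOS's DHCP lease database every two seconds until the VM's MAC address appears; the matching lease entry then yields the guest IP. A minimal sketch of that lookup, assuming the stock /var/db/dhcpd_leases block layout (name/ip_address/hw_address lines), not minikube's actual parser:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// findLeaseIP scans the macOS DHCP lease file for a hardware address and
// returns the IP of the matching entry. Lease blocks contain lines like
//   ip_address=192.169.0.14
//   hw_address=1,a6:1c:88:9c:44:3
// (assumed layout; \s+ in the pattern spans the intervening newline).
func findLeaseIP(leaseFile, mac string) (string, error) {
	data, err := os.ReadFile(leaseFile)
	if err != nil {
		return "", err
	}
	re := regexp.MustCompile(`ip_address=(\S+)\s+hw_address=1,(\S+)`)
	for _, m := range re.FindAllStringSubmatch(string(data), -1) {
		if m[2] == mac {
			return m[1], nil
		}
	}
	return "", fmt.Errorf("%s not found in %s", mac, leaseFile)
}

func main() {
	ip, err := findLeaseIP("/var/db/dhcpd_leases", "a6:1c:88:9c:44:3")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(ip) // e.g. 192.169.0.14
}
```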
	I0805 16:21:49.663945    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:49.664855    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:49.665006    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:49.665127    4640 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0805 16:21:49.665139    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:21:49.665271    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:21:49.665344    4640 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:21:49.666326    4640 main.go:141] libmachine: Detecting operating system of created instance...
	I0805 16:21:49.666337    4640 main.go:141] libmachine: Waiting for SSH to be available...
	I0805 16:21:49.666342    4640 main.go:141] libmachine: Getting to WaitForSSH function...
	I0805 16:21:49.666348    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.666471    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.666603    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.666743    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.666869    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.667045    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.667279    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.667287    4640 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0805 16:21:49.724369    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
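The `exit 0` probe above is how libmachine decides SSH is reachable: it keeps issuing a no-op command until one returns cleanly. A rough equivalent using the system ssh client (host, user and key path taken from the log; the retry cadence and ssh options are assumptions):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH repeatedly runs "exit 0" on the guest until the command
// succeeds or the deadline passes, mirroring the WaitForSSH step above.
func waitForSSH(user, host, keyPath string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("ssh",
			"-i", keyPath,
			"-o", "StrictHostKeyChecking=no",
			"-o", "ConnectTimeout=5",
			fmt.Sprintf("%s@%s", user, host),
			"exit 0")
		if err := cmd.Run(); err == nil {
			return nil // SSH is up
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("ssh to %s not available after %s", host, timeout)
}

func main() {
	err := waitForSSH("docker", "192.169.0.14",
		"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa",
		time.Minute)
	fmt.Println(err)
}
```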
	I0805 16:21:49.724382    4640 main.go:141] libmachine: Detecting the provisioner...
	I0805 16:21:49.724388    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.724522    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.724626    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.724719    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.724810    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.724938    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.725087    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.725094    4640 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0805 16:21:49.782403    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0805 16:21:49.782454    4640 main.go:141] libmachine: found compatible host: buildroot
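Provisioner detection is just `cat /etc/os-release` plus a match on the ID field, which is why the Buildroot output above resolves to the buildroot provisioner. A sketch of that parse (libmachine matches the ID against its registered provisioners; this only shows the extraction):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner pulls the ID field out of /etc/os-release output,
// the key used for the "found compatible host" decision above.
func detectProvisioner(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		if v, ok := strings.CutPrefix(sc.Text(), "ID="); ok {
			return strings.Trim(v, `"`)
		}
	}
	return ""
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	fmt.Println(detectProvisioner(out)) // buildroot
}
```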
	I0805 16:21:49.782460    4640 main.go:141] libmachine: Provisioning with buildroot...
	I0805 16:21:49.782466    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.782595    4640 buildroot.go:166] provisioning hostname "multinode-985000-m02"
	I0805 16:21:49.782606    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.782698    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.782797    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.782871    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.782964    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.783079    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.783204    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.783350    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.783359    4640 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000-m02 && echo "multinode-985000-m02" | sudo tee /etc/hostname
	I0805 16:21:49.854175    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000-m02
	
	I0805 16:21:49.854190    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.854319    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:49.854421    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.854492    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:49.854587    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:49.854712    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:49.854870    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:49.854882    4640 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:21:49.917814    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 
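The shell block above makes the hostname mapping idempotent: if some line in /etc/hosts already ends with the hostname nothing changes, an existing 127.0.1.1 entry is rewritten in place, and otherwise a new one is appended. The same logic in Go, operating on the file contents as a string:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname reproduces the grep/sed/tee sequence above: leave the
// file alone when the name is already mapped, rewrite an existing
// 127.0.1.1 line, or append a fresh mapping. Purely illustrative.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // already mapped
	}
	loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loop.MatchString(hosts) {
		return loop.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "multinode-985000-m02"))
}
```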
	I0805 16:21:49.917830    4640 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:21:49.917840    4640 buildroot.go:174] setting up certificates
	I0805 16:21:49.917846    4640 provision.go:84] configureAuth start
	I0805 16:21:49.917856    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:21:49.917985    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:49.918095    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:49.918192    4640 provision.go:143] copyHostCerts
	I0805 16:21:49.918223    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:21:49.918280    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:21:49.918285    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:21:49.918411    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:21:49.918617    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:21:49.918652    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:21:49.918658    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:21:49.918733    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:21:49.918888    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:21:49.918922    4640 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:21:49.918927    4640 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:21:49.918994    4640 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:21:49.919145    4640 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-985000-m02]
	I0805 16:21:50.072896    4640 provision.go:177] copyRemoteCerts
	I0805 16:21:50.072947    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:21:50.072962    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.073107    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.073199    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.073317    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.073426    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:50.108446    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:21:50.108519    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:21:50.128617    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:21:50.128684    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0805 16:21:50.148653    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:21:50.148720    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:21:50.168682    4640 provision.go:87] duration metric: took 250.828344ms to configureAuth
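configureAuth above copies the CA material into the machine store and then mints a server certificate whose SANs cover every address the node answers on (127.0.0.1, the lease IP, localhost, minikube, and the node name), before scp-ing the PEMs into /etc/docker. A compact sketch with crypto/x509; it self-signs instead of signing with the machine CA, so it is illustrative only:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// generateServerCert builds a TLS server cert whose SANs split into IP
// and DNS entries, matching the san=[...] list logged above. Signing with
// the CA key is omitted here (self-signed) to keep the sketch short.
func generateServerCert(org string, sans []string) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	for _, s := range sans {
		if ip := net.ParseIP(s); ip != nil {
			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
		} else {
			tmpl.DNSNames = append(tmpl.DNSNames, s)
		}
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	pemCert, err := generateServerCert("jenkins.multinode-985000-m02",
		[]string{"127.0.0.1", "192.169.0.14", "localhost", "minikube", "multinode-985000-m02"})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d bytes of PEM\n", len(pemCert))
}
```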
	I0805 16:21:50.168695    4640 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:21:50.168835    4640 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:21:50.168849    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:50.168993    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.169087    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.169175    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.169262    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.169345    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.169486    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.169621    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.169628    4640 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:21:50.228062    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:21:50.228074    4640 buildroot.go:70] root file system type: tmpfs
	I0805 16:21:50.228150    4640 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:21:50.228164    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.228293    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.228388    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.228480    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.228586    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.228755    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.228888    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.228934    4640 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:21:50.296901    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:21:50.296919    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:50.297064    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:50.297158    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.297250    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:50.297333    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:50.297475    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:50.297611    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:50.297624    4640 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:21:51.873922    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
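The docker.service update above follows a write-then-swap pattern: render the unit to docker.service.new, diff it against the live unit, and only on a difference move it into place and reload/enable/restart. Since no unit file existed yet on this fresh node, the diff fails and the full install path runs, creating the multi-user.target symlink. A sketch of that sequence under stated assumptions (local exec, no sudo handling, harmless temp path in main):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// installUnit mirrors the diff/mv/systemctl sequence above: the rendered
// unit only replaces the live one, and the daemon is only reloaded and
// restarted, when the content actually differs.
func installUnit(path string, rendered []byte) error {
	old, _ := os.ReadFile(path) // may not exist yet, as in the log
	if bytes.Equal(old, rendered) {
		return nil // unit unchanged; nothing to do
	}
	if err := os.WriteFile(path+".new", rendered, 0644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"-f", "daemon-reload"},
		{"-f", "enable", "docker"},
		{"-f", "restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	// Demo against a temp path; the systemctl calls will simply report
	// their error on systems where that unit is not manageable.
	err := installUnit(filepath.Join(os.TempDir(), "docker.service"), []byte("[Unit]\n"))
	fmt.Println(err)
}
```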
	I0805 16:21:51.873940    4640 main.go:141] libmachine: Checking connection to Docker...
	I0805 16:21:51.873964    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetURL
	I0805 16:21:51.874107    4640 main.go:141] libmachine: Docker is up and running!
	I0805 16:21:51.874115    4640 main.go:141] libmachine: Reticulating splines...
	I0805 16:21:51.874120    4640 client.go:171] duration metric: took 12.916447572s to LocalClient.Create
	I0805 16:21:51.874129    4640 start.go:167] duration metric: took 12.916485141s to libmachine.API.Create "multinode-985000"
	I0805 16:21:51.874135    4640 start.go:293] postStartSetup for "multinode-985000-m02" (driver="hyperkit")
	I0805 16:21:51.874142    4640 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:21:51.874152    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:51.874292    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:21:51.874313    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:51.874416    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:51.874505    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.874583    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:51.874657    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:51.915394    4640 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:21:51.919538    4640 command_runner.go:130] > NAME=Buildroot
	I0805 16:21:51.919549    4640 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:21:51.919553    4640 command_runner.go:130] > ID=buildroot
	I0805 16:21:51.919557    4640 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:21:51.919560    4640 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:21:51.919635    4640 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:21:51.919645    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:21:51.919746    4640 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:21:51.919897    4640 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:21:51.919903    4640 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:21:51.920070    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:21:51.929531    4640 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:21:51.959146    4640 start.go:296] duration metric: took 85.003807ms for postStartSetup
	I0805 16:21:51.959174    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:21:51.959830    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:51.959996    4640 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:21:51.960355    4640 start.go:128] duration metric: took 13.03589336s to createHost
	I0805 16:21:51.960370    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:51.960461    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:51.960532    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.960607    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:51.960679    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:51.960792    4640 main.go:141] libmachine: Using SSH client type: native
	I0805 16:21:51.960921    4640 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xd8600c0] 0xd862e20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:21:51.960928    4640 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 16:21:52.018527    4640 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722900112.019707412
	
	I0805 16:21:52.018539    4640 fix.go:216] guest clock: 1722900112.019707412
	I0805 16:21:52.018544    4640 fix.go:229] Guest: 2024-08-05 16:21:52.019707412 -0700 PDT Remote: 2024-08-05 16:21:51.960363 -0700 PDT m=+79.692294773 (delta=59.344412ms)
	I0805 16:21:52.018555    4640 fix.go:200] guest clock delta is within tolerance: 59.344412ms
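The guest clock check reads `date +%s.%N` over SSH (the `%!s(MISSING)` in the logged command is a Go fmt placeholder for the unlogged format verbs, as the numeric output confirms) and compares it against local time; here the ~59ms delta is within tolerance, so no resync is needed. A sketch of the delta computation:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and reports how far
// the guest clock drifts from local time. minikube only resyncs the guest
// clock when the drift exceeds its tolerance.
func clockDelta(guestOut string) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return time.Since(guest), nil
}

func main() {
	d, err := clockDelta("1722900112.019707412")
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta: %v\n", time.Duration(math.Abs(float64(d))))
}
```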
	I0805 16:21:52.018561    4640 start.go:83] releasing machines lock for "multinode-985000-m02", held for 13.094193048s
	I0805 16:21:52.018577    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.018703    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:21:52.040117    4640 out.go:177] * Found network options:
	I0805 16:21:52.084887    4640 out.go:177]   - NO_PROXY=192.169.0.13
	W0805 16:21:52.106885    4640 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:21:52.106945    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.107811    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.108153    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:21:52.108320    4640 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:21:52.108371    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	W0805 16:21:52.108412    4640 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:21:52.108519    4640 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:21:52.108545    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:21:52.108628    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:52.108772    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:21:52.108842    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:52.108951    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:21:52.109026    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:52.109176    4640 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:21:52.109197    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:52.109323    4640 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:21:52.141829    4640 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:21:52.141939    4640 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:21:52.141993    4640 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:21:52.191903    4640 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:21:52.192466    4640 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:21:52.192507    4640 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
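The find/mv step above sidelines any bridge or podman CNI config by renaming it with a .mk_disabled suffix, so only the CNI minikube manages stays active (here 87-podman-bridge.conflist gets disabled). The same walk in Go, as a sketch (no sudo, and only the top-level directory, matching -maxdepth 1):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI renames every bridge/podman CNI config in dir to
// <name>.mk_disabled, skipping files that are already disabled.
func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNI("/etc/cni/net.d")
	fmt.Println(moved, err)
}
```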
	I0805 16:21:52.192514    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:21:52.192581    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:21:52.208225    4640 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:21:52.208528    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:21:52.217078    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:21:52.225489    4640 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:21:52.225534    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:21:52.233992    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:21:52.242465    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:21:52.250835    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:21:52.260065    4640 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:21:52.268863    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:21:52.277242    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:21:52.285501    4640 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:21:52.293845    4640 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:21:52.301185    4640 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:21:52.301319    4640 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:21:52.308881    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:52.403323    4640 ssh_runner.go:195] Run: sudo systemctl restart containerd
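The run of sed commands above rewrites /etc/containerd/config.toml in place: pin the pause sandbox image, force SystemdCgroup = false (cgroupfs, matching the detected driver), migrate runtime types to runc v2, and point conf_dir at /etc/cni/net.d, after which containerd is restarted. The same edits applied in memory, as a sketch:

```go
package main

import (
	"fmt"
	"regexp"
)

// configureContainerd applies the equivalents of the sed rewrites above
// to a config.toml string, preserving each line's indentation via ${1}.
func configureContainerd(toml string) string {
	rules := []struct{ re, repl string }{
		{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`},
		{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
		{`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
		{`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
		{`(?m)^(\s*)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
	}
	for _, r := range rules {
		toml = regexp.MustCompile(r.re).ReplaceAllString(toml, r.repl)
	}
	return toml
}

func main() {
	in := "    sandbox_image = \"pause:3.6\"\n    SystemdCgroup = true\n"
	fmt.Print(configureContainerd(in))
}
```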
	I0805 16:21:52.423722    4640 start.go:495] detecting cgroup driver to use...
	I0805 16:21:52.423794    4640 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:21:52.442557    4640 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:21:52.443108    4640 command_runner.go:130] > [Unit]
	I0805 16:21:52.443119    4640 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:21:52.443124    4640 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:21:52.443128    4640 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:21:52.443132    4640 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:21:52.443136    4640 command_runner.go:130] > StartLimitBurst=3
	I0805 16:21:52.443141    4640 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:21:52.443147    4640 command_runner.go:130] > [Service]
	I0805 16:21:52.443151    4640 command_runner.go:130] > Type=notify
	I0805 16:21:52.443155    4640 command_runner.go:130] > Restart=on-failure
	I0805 16:21:52.443160    4640 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0805 16:21:52.443165    4640 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:21:52.443175    4640 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:21:52.443182    4640 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:21:52.443188    4640 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:21:52.443194    4640 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:21:52.443200    4640 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:21:52.443212    4640 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:21:52.443224    4640 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:21:52.443231    4640 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:21:52.443234    4640 command_runner.go:130] > ExecStart=
	I0805 16:21:52.443246    4640 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:21:52.443250    4640 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:21:52.443256    4640 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:21:52.443262    4640 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:21:52.443265    4640 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:21:52.443269    4640 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:21:52.443272    4640 command_runner.go:130] > LimitCORE=infinity
	I0805 16:21:52.443277    4640 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:21:52.443282    4640 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:21:52.443285    4640 command_runner.go:130] > TasksMax=infinity
	I0805 16:21:52.443290    4640 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:21:52.443296    4640 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:21:52.443299    4640 command_runner.go:130] > Delegate=yes
	I0805 16:21:52.443304    4640 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:21:52.443313    4640 command_runner.go:130] > KillMode=process
	I0805 16:21:52.443317    4640 command_runner.go:130] > [Install]
	I0805 16:21:52.443321    4640 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:21:52.443454    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:21:52.455112    4640 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:21:52.472976    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:21:52.485648    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:21:52.496640    4640 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:21:52.520742    4640 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:21:52.532843    4640 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:21:52.547391    4640 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:21:52.547619    4640 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:21:52.550475    4640 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:21:52.550551    4640 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:21:52.558821    4640 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:21:52.572801    4640 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:21:52.669948    4640 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:21:52.772017    4640 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:21:52.772038    4640 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
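The 130-byte /etc/docker/daemon.json pushed here is what tells dockerd itself to use the cgroupfs driver. The payload is not echoed in the log, so the content rendered below is a plausible reconstruction (an assumption, not the verbatim file); only the cgroupfs exec-opt is implied by the "configuring docker to use cgroupfs" line above:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Renders a plausible daemon.json for the scp step above. Every field
// other than the cgroupfs exec-opt is an assumption.
func main() {
	cfg := map[string]any{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}
```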
	I0805 16:21:52.785587    4640 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:21:52.887001    4640 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:22:53.782764    4640 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0805 16:22:53.782779    4640 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0805 16:22:53.782788    4640 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.895755367s)
	I0805 16:22:53.782849    4640 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0805 16:22:53.791796    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:22:53.791808    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578059613Z" level=info msg="Starting up"
	I0805 16:22:53.791820    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578746899Z" level=info msg="containerd not running, starting managed containerd"
	I0805 16:22:53.791833    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.579364099Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	I0805 16:22:53.791843    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.597194743Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0805 16:22:53.791853    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613422882Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0805 16:22:53.791865    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613448264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0805 16:22:53.791875    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613527396Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0805 16:22:53.791884    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613540484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791897    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613598776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791906    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613664323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791924    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613844698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791936    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613881896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791948    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613894727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.791957    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613902000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791967    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614005875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791976    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614259691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.791991    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615867073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.792000    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615974584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:22:53.792024    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616138996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:22:53.792033    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616172823Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0805 16:22:53.792042    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616291383Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0805 16:22:53.792050    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616398312Z" level=info msg="metadata content store policy set" policy=shared
	I0805 16:22:53.792059    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.618998610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0805 16:22:53.792068    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619065338Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0805 16:22:53.792076    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619081703Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0805 16:22:53.792085    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619092273Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0805 16:22:53.792094    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619101426Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0805 16:22:53.792103    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619164798Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0805 16:22:53.792113    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619370752Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0805 16:22:53.792121    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619460644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0805 16:22:53.792129    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619495461Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0805 16:22:53.792138    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619506581Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0805 16:22:53.792148    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619515758Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792158    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619524383Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792170    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619532546Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792178    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619541391Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792187    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619550990Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792197    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619565508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792266    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619576616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792278    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619584035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0805 16:22:53.792291    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619598072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792299    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619608190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792307    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619616319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792316    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619625389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792326    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619634123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792335    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619648148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792344    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619658942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792353    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619667668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792362    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619676302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792371    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619686416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792380    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619694011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792388    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619701566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792397    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619709342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792406    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619719250Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0805 16:22:53.792415    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619733203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792423    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619741785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792432    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619749153Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0805 16:22:53.792442    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619797467Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0805 16:22:53.792454    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619811479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0805 16:22:53.792467    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619819137Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0805 16:22:53.792661    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619826861Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0805 16:22:53.792673    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619833500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0805 16:22:53.792682    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619841896Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0805 16:22:53.792690    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619852419Z" level=info msg="NRI interface is disabled by configuration."
	I0805 16:22:53.792702    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620071162Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0805 16:22:53.792710    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620124755Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0805 16:22:53.792718    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620155079Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0805 16:22:53.792725    4640 command_runner.go:130] > Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620168148Z" level=info msg="containerd successfully booted in 0.023750s"
	I0805 16:22:53.792734    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.639692405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0805 16:22:53.792741    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.644102102Z" level=info msg="Loading containers: start."
	I0805 16:22:53.792763    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.740540264Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0805 16:22:53.792774    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.826229634Z" level=info msg="Loading containers: done."
	I0805 16:22:53.792783    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843276878Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0805 16:22:53.792792    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843375843Z" level=info msg="Daemon has completed initialization"
	I0805 16:22:53.792800    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869275976Z" level=info msg="API listen on /var/run/docker.sock"
	I0805 16:22:53.792807    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869434474Z" level=info msg="API listen on [::]:2376"
	I0805 16:22:53.792813    4640 command_runner.go:130] > Aug 05 23:21:51 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	I0805 16:22:53.792821    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.919662359Z" level=info msg="Processing signal 'terminated'"
	I0805 16:22:53.792829    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920773928Z" level=info msg="Daemon shutdown complete"
	I0805 16:22:53.792840    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920792538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0805 16:22:53.792852    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920845272Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0805 16:22:53.792861    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920858866Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0805 16:22:53.792868    4640 command_runner.go:130] > Aug 05 23:21:52 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0805 16:22:53.792874    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0805 16:22:53.792904    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0805 16:22:53.792911    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:22:53.792918    4640 command_runner.go:130] > Aug 05 23:21:53 multinode-985000-m02 dockerd[923]: time="2024-08-05T23:21:53.957339969Z" level=info msg="Starting up"
	I0805 16:22:53.792929    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 dockerd[923]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0805 16:22:53.792940    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0805 16:22:53.792946    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0805 16:22:53.792952    4640 command_runner.go:130] > Aug 05 23:22:53 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0805 16:22:53.817223    4640 out.go:177] 
	W0805 16:22:53.838182    4640 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 05 23:21:50 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578059613Z" level=info msg="Starting up"
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.578746899Z" level=info msg="containerd not running, starting managed containerd"
	Aug 05 23:21:50 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:50.579364099Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.597194743Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613422882Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613448264Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613527396Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613540484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613598776Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613664323Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613844698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613881896Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613894727Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.613902000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614005875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.614259691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615867073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.615974584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616138996Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616172823Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616291383Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.616398312Z" level=info msg="metadata content store policy set" policy=shared
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.618998610Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619065338Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619081703Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619092273Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619101426Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619164798Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619370752Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619460644Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619495461Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619506581Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619515758Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619524383Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619532546Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619541391Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619550990Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619565508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619576616Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619584035Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619598072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619608190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619616319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619625389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619634123Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619648148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619658942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619667668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619676302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619686416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619694011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619701566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619709342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619719250Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619733203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619741785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619749153Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619797467Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619811479Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619819137Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619826861Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619833500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619841896Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.619852419Z" level=info msg="NRI interface is disabled by configuration."
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620071162Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620124755Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620155079Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 05 23:21:50 multinode-985000-m02 dockerd[521]: time="2024-08-05T23:21:50.620168148Z" level=info msg="containerd successfully booted in 0.023750s"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.639692405Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.644102102Z" level=info msg="Loading containers: start."
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.740540264Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.826229634Z" level=info msg="Loading containers: done."
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843276878Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.843375843Z" level=info msg="Daemon has completed initialization"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869275976Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 05 23:21:51 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:51.869434474Z" level=info msg="API listen on [::]:2376"
	Aug 05 23:21:51 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.919662359Z" level=info msg="Processing signal 'terminated'"
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920773928Z" level=info msg="Daemon shutdown complete"
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920792538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920845272Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 05 23:21:52 multinode-985000-m02 dockerd[514]: time="2024-08-05T23:21:52.920858866Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 05 23:21:52 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 05 23:21:53 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:21:53 multinode-985000-m02 dockerd[923]: time="2024-08-05T23:21:53.957339969Z" level=info msg="Starting up"
	Aug 05 23:22:53 multinode-985000-m02 dockerd[923]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 05 23:22:53 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0805 16:22:53.838301    4640 out.go:239] * 
	W0805 16:22:53.839537    4640 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:22:53.901092    4640 out.go:177] 
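
The failure above reduces to a single line: after the cgroup-driver reconfiguration, the restarted dockerd (pid 923) could not dial /run/containerd/containerd.sock before its startup deadline, so systemd marked docker.service failed and minikube aborted with RUNTIME_ENABLE. Two illustrative sketches follow; both are assumptions added for the reader, not artifacts of this test run.

The 130-byte /etc/docker/daemon.json written by the "configuring docker to use cgroupfs" step is not echoed in the log. A plausible reconstruction, assuming minikube's stock engine options (cgroup driver via exec-opts, json-file logging, overlay2 storage):

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }

And a minimal Go probe for the step that actually timed out: it dials the containerd socket with a short deadline, which is what dockerd[923] was blocked on. File name, timeout, and exit code are arbitrary choices here; it would be run inside the guest VM (e.g. via minikube ssh) purely as a diagnostic sketch:

    // probecontainerd.go - hypothetical diagnostic, not part of minikube.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Socket path taken verbatim from the dockerd error above.
        const sock = "/run/containerd/containerd.sock"
        conn, err := net.DialTimeout("unix", sock, 5*time.Second)
        if err != nil {
            // Mirrors dockerd's "context deadline exceeded" failure mode.
            fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("containerd socket is accepting connections")
    }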
	
	
	==> Docker <==
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.538240622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.545949341Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546006859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546094356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.546213245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:21:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2a8cd74365e92f179bb6ee1ce28c9364c192d2bf64c54e8b18c5339cfbdf5dcd/resolv.conf as [nameserver 192.169.0.1]"
	Aug 05 23:21:36 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:21:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/35b9ac42edc06af57c697463456d60a00f8d9d12849ef967af1e639bc238e3b3/resolv.conf as [nameserver 192.169.0.1]"
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.715025205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.715620680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.716022138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.717088853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755323726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755409641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.755418837Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:21:36 multinode-985000 dockerd[1273]: time="2024-08-05T23:21:36.764703174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.493861515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.493963422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.494329548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:57 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:57.494770138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:57 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:22:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/abfb33d4f204dd0b2a7ffc533336cce5539144674b64125ac7373b0be8961559/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 05 23:22:58 multinode-985000 cri-dockerd[1167]: time="2024-08-05T23:22:58Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841390849Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841491056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841532145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:22:58 multinode-985000 dockerd[1273]: time="2024-08-05T23:22:58.841640743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
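
The cri-dockerd entries above also show how pod DNS gets wired on the healthy primary node: before a sandbox starts, cri-dockerd rewrites the container's resolv.conf. Reconstructed directly from the bracketed fields of the 23:22:57 line, the rewritten file for the busybox sandbox would read:

    nameserver 10.96.0.10
    search default.svc.cluster.local svc.cluster.local cluster.local
    options ndots:5

The earlier 23:21:36 rewrites instead point the coredns and storage-provisioner sandboxes at the host-side resolver 192.169.0.1, the expected shape for pods that fall back to the node's DNS.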
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0cbc162071e51       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   13 minutes ago      Running             busybox                   0                   abfb33d4f204d       busybox-fc5497c4f-44k5g
	c9365aec33892       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   0                   35b9ac42edc06       coredns-7db6d8ff4d-fqtll
	3d9fd612d0b14       6e38f40d628db                                                                                         15 minutes ago      Running             storage-provisioner       0                   2a8cd74365e92       storage-provisioner
	724e5cfab0a27       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              15 minutes ago      Running             kindnet-cni               0                   65a1122097f07       kindnet-tvtvg
	d58ca48f9f8b2       55bb025d2cfa5                                                                                         15 minutes ago      Running             kube-proxy                0                   c91338eb0e138       kube-proxy-fwgw7
	792feba1a6f6b       3edc18e7b7672                                                                                         15 minutes ago      Running             kube-scheduler            0                   c86e04eb7823b       kube-scheduler-multinode-985000
	1fdd85b796ab3       3861cfcd7c04c                                                                                         15 minutes ago      Running             etcd                      0                   b58900db52990       etcd-multinode-985000
	d11865076c645       76932a3b37d7e                                                                                         15 minutes ago      Running             kube-controller-manager   0                   55a20063845e3       kube-controller-manager-multinode-985000
	608878b33f358       1f6d574d502f3                                                                                         15 minutes ago      Running             kube-apiserver            0                   569788c2699f1       kube-apiserver-multinode-985000
	
	
	==> coredns [c9365aec3389] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57821 - 19682 "HINFO IN 7732396596932693360.4385804994640298901. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014623104s
	[INFO] 10.244.0.3:44234 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136193s
	[INFO] 10.244.0.3:37423 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.058799401s
	[INFO] 10.244.0.3:57961 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.010090318s
	[INFO] 10.244.0.3:37799 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.012765436s
	[INFO] 10.244.0.3:46499 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078364s
	[INFO] 10.244.0.3:42436 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011216992s
	[INFO] 10.244.0.3:35880 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000144767s
	[INFO] 10.244.0.3:39224 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104006s
	[INFO] 10.244.0.3:48536 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013324615s
	[INFO] 10.244.0.3:55841 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000221823s
	[INFO] 10.244.0.3:46712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111417s
	[INFO] 10.244.0.3:51982 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099744s
	[INFO] 10.244.0.3:55425 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080184s
	[INFO] 10.244.0.3:58084 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119904s
	[INFO] 10.244.0.3:57892 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049065s
	[INFO] 10.244.0.3:52329 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049128s
	[INFO] 10.244.0.3:60384 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083319s
	[INFO] 10.244.0.3:51923 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000058598s
	[INFO] 10.244.0.3:37985 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007256s
	[INFO] 10.244.0.3:45792 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000071025s
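
Each coredns query line above follows the log plugin's common format: client ip:port, an internal query id, the quoted tuple "TYPE CLASS NAME proto request-size do-bit udp-bufsize", then rcode, header flags, response size, and duration. The NOERROR/NXDOMAIN mix with sub-millisecond cluster.local answers indicates in-cluster DNS on the primary node was healthy. A throwaway Go sketch that extracts the interesting fields from one such line (the format is an assumption based on coredns's default logging; the regex and field names are illustrative):

    // parsecorednslog.go - hypothetical helper for reading the lines above.
    package main

    import (
        "fmt"
        "regexp"
    )

    // client - id "type class name proto size do bufsize" rcode flags rsize duration
    var logLine = regexp.MustCompile(`^\[INFO\] (\S+) - (\d+) "(\S+) (\S+) (\S+) (\S+) (\d+) (\S+) (\d+)" (\S+) (\S+) (\d+) (\S+)$`)

    func main() {
        sample := `[INFO] 10.244.0.3:44234 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136193s`
        if m := logLine.FindStringSubmatch(sample); m != nil {
            // m[1]=client, m[3]=qtype, m[5]=name, m[10]=rcode, m[13]=duration
            fmt.Printf("client=%s qtype=%s name=%s rcode=%s took=%s\n", m[1], m[3], m[5], m[10], m[13])
        }
    }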
	
	
	==> describe nodes <==
	Name:               multinode-985000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-985000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=multinode-985000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T16_21_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:21:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-985000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:36:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:33:23 +0000   Mon, 05 Aug 2024 23:21:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-985000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 43d0d80c8ac846e58ac4351481e2a76f
	  System UUID:                3ac6443b-0000-0000-898d-9b152fa64288
	  Boot ID:                    382df761-aca3-4a9d-bdce-655bf0444398
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-44k5g                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7db6d8ff4d-fqtll                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-multinode-985000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-tvtvg                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-multinode-985000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-multinode-985000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-fwgw7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-multinode-985000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node multinode-985000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node multinode-985000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node multinode-985000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node multinode-985000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node multinode-985000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node multinode-985000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           15m                node-controller  Node multinode-985000 event: Registered Node multinode-985000 in Controller
	  Normal  NodeReady                15m                kubelet          Node multinode-985000 status is now: NodeReady
	
	
	Name:               multinode-985000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-985000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=multinode-985000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T16_35_55_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:35:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-985000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:36:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:36:09 +0000   Mon, 05 Aug 2024 23:35:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:36:09 +0000   Mon, 05 Aug 2024 23:35:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:36:09 +0000   Mon, 05 Aug 2024 23:35:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:36:09 +0000   Mon, 05 Aug 2024 23:36:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.15
	  Hostname:    multinode-985000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 de33b8a09ea841548571815588d91336
	  System UUID:                f79c425f-0000-0000-b959-1b18fd31916b
	  Boot ID:                    a263d4fd-5a9a-4e6d-b9a5-6d8b00715c16
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-p2wf9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kindnet-5kfjr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m4s
	  kube-system                 kube-proxy-s65dd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 56s                  kube-proxy       
	  Normal  Starting                 118s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  2m5s (x2 over 2m5s)  kubelet          Node multinode-985000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x2 over 2m5s)  kubelet          Node multinode-985000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x2 over 2m5s)  kubelet          Node multinode-985000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                102s                 kubelet          Node multinode-985000-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  59s (x2 over 59s)    kubelet          Node multinode-985000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x2 over 59s)    kubelet          Node multinode-985000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x2 over 59s)    kubelet          Node multinode-985000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           54s                  node-controller  Node multinode-985000-m03 event: Registered Node multinode-985000-m03 in Controller
	  Normal  NodeReady                44s                  kubelet          Node multinode-985000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.261909] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000004] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.788416] systemd-fstab-generator[490]: Ignoring "noauto" option for root device
	[  +0.099076] systemd-fstab-generator[502]: Ignoring "noauto" option for root device
	[  +1.730104] systemd-fstab-generator[841]: Ignoring "noauto" option for root device
	[  +0.293514] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.050985] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.056812] systemd-fstab-generator[892]: Ignoring "noauto" option for root device
	[  +0.126132] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[  +2.458612] systemd-fstab-generator[1120]: Ignoring "noauto" option for root device
	[  +0.104830] systemd-fstab-generator[1132]: Ignoring "noauto" option for root device
	[  +0.110549] systemd-fstab-generator[1144]: Ignoring "noauto" option for root device
	[  +0.128910] systemd-fstab-generator[1159]: Ignoring "noauto" option for root device
	[  +3.841948] systemd-fstab-generator[1259]: Ignoring "noauto" option for root device
	[  +0.049995] kauditd_printk_skb: 180 callbacks suppressed
	[  +2.575866] systemd-fstab-generator[1508]: Ignoring "noauto" option for root device
	[  +3.513702] systemd-fstab-generator[1689]: Ignoring "noauto" option for root device
	[  +0.052965] kauditd_printk_skb: 70 callbacks suppressed
	[Aug 5 23:21] systemd-fstab-generator[2095]: Ignoring "noauto" option for root device
	[  +0.093506] kauditd_printk_skb: 62 callbacks suppressed
	[ +13.997559] systemd-fstab-generator[2287]: Ignoring "noauto" option for root device
	[  +0.103967] kauditd_printk_skb: 12 callbacks suppressed
	[ +16.210215] kauditd_printk_skb: 60 callbacks suppressed
	[Aug 5 23:22] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [1fdd85b796ab] <==
	{"level":"info","ts":"2024-08-05T23:21:02.190761Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2024-08-05T23:21:02.845352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.84543Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.845462Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 1"}
	{"level":"info","ts":"2024-08-05T23:21:02.845512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.845562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-05T23:21:02.849595Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.851787Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-985000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T23:21:02.852037Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:21:02.855611Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-08-05T23:21:02.856003Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.856059Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.85615Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.863221Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:21:02.86336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T23:21:02.863406Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T23:21:02.864495Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T23:31:02.914901Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":684}
	{"level":"info","ts":"2024-08-05T23:31:02.918154Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":684,"took":"2.558785ms","hash":2682644219,"current-db-size-bytes":2088960,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2088960,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-08-05T23:31:02.918199Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2682644219,"revision":684,"compact-revision":-1}
	{"level":"info","ts":"2024-08-05T23:36:02.919565Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":925}
	{"level":"info","ts":"2024-08-05T23:36:02.920973Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":925,"took":"1.036284ms","hash":3918561037,"current-db-size-bytes":2088960,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1814528,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-08-05T23:36:02.921075Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3918561037,"revision":925,"compact-revision":684}
	
	
	==> kernel <==
	 23:36:53 up 16 min,  0 users,  load average: 0.14, 0.15, 0.10
	Linux multinode-985000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [724e5cfab0a2] <==
	I0805 23:35:44.988727       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:35:44.988731       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.1.0/24] 
	I0805 23:35:54.988688       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:35:54.988911       1 main.go:299] handling current node
	I0805 23:36:04.991069       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:36:04.991515       1 main.go:299] handling current node
	I0805 23:36:04.991590       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:36:04.991736       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:36:04.991992       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.169.0.15 Flags: [] Table: 0} 
	I0805 23:36:14.989579       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:36:14.989997       1 main.go:299] handling current node
	I0805 23:36:14.990198       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:36:14.990433       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:36:24.988684       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:36:24.988821       1 main.go:299] handling current node
	I0805 23:36:24.988872       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:36:24.988911       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:36:34.988817       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:36:34.988909       1 main.go:299] handling current node
	I0805 23:36:34.988935       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:36:34.988949       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:36:44.992669       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:36:44.992745       1 main.go:299] handling current node
	I0805 23:36:44.992779       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:36:44.992802       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [608878b33f35] <==
	I0805 23:21:04.097032       1 aggregator.go:165] initial CRD sync complete...
	I0805 23:21:04.097038       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 23:21:04.097041       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 23:21:04.097046       1 cache.go:39] Caches are synced for autoregister controller
	I0805 23:21:04.110976       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 23:21:04.964782       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0805 23:21:04.969492       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0805 23:21:04.969592       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0805 23:21:05.293407       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 23:21:05.318630       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 23:21:05.372930       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0805 23:21:05.377089       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.13]
	I0805 23:21:05.377814       1 controller.go:615] quota admission added evaluator for: endpoints
	I0805 23:21:05.381896       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0805 23:21:06.014220       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0805 23:21:06.529594       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 23:21:06.534785       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0805 23:21:06.541889       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 23:21:20.069451       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0805 23:21:20.168118       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0805 23:34:22.712021       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52583: use of closed network connection
	E0805 23:34:23.040370       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52588: use of closed network connection
	E0805 23:34:23.352264       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52593: use of closed network connection
	E0805 23:34:26.444399       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52624: use of closed network connection
	E0805 23:34:26.631411       1 conn.go:339] Error on socket receive: read tcp 192.169.0.13:8443->192.169.0.1:52626: use of closed network connection
	
	
	==> kube-controller-manager [d11865076c64] <==
	I0805 23:22:59.132399       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.529µs"
	I0805 23:34:49.118620       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-985000-m03\" does not exist"
	I0805 23:34:49.123685       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-985000-m03" podCIDRs=["10.244.1.0/24"]
	I0805 23:34:49.553799       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-985000-m03"
	I0805 23:35:12.244278       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-985000-m03"
	I0805 23:35:12.252224       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.969µs"
	I0805 23:35:12.259725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.754µs"
	I0805 23:35:14.267796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.716009ms"
	I0805 23:35:14.267862       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.069µs"
	I0805 23:35:51.179064       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.106041ms"
	I0805 23:35:51.195857       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.438177ms"
	I0805 23:35:51.211043       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.139069ms"
	I0805 23:35:51.211379       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="291.666µs"
	I0805 23:35:55.268521       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-985000-m03\" does not exist"
	I0805 23:35:55.272637       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-985000-m03" podCIDRs=["10.244.2.0/24"]
	I0805 23:35:57.161739       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.697µs"
	I0805 23:36:10.485777       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-985000-m03"
	I0805 23:36:10.496807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.532µs"
	I0805 23:36:19.181053       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.67µs"
	I0805 23:36:19.184540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.764µs"
	I0805 23:36:19.191433       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.037µs"
	I0805 23:36:19.365196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.813µs"
	I0805 23:36:19.367176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.532µs"
	I0805 23:36:20.387745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.044943ms"
	I0805 23:36:20.388000       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.528µs"
	
	
	==> kube-proxy [d58ca48f9f8b] <==
	I0805 23:21:21.029929       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:21:21.072929       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0805 23:21:21.105532       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:21:21.105552       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:21:21.105563       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:21:21.107493       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:21:21.107594       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:21:21.107602       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:21:21.108477       1 config.go:192] "Starting service config controller"
	I0805 23:21:21.108482       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:21:21.108492       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:21:21.108494       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:21:21.108784       1 config.go:319] "Starting node config controller"
	I0805 23:21:21.108789       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:21:21.209420       1 shared_informer.go:320] Caches are synced for node config
	I0805 23:21:21.209474       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:21:21.209501       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [792feba1a6f6] <==
	E0805 23:21:04.024310       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:04.024229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 23:21:04.024017       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:21:04.024329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:21:04.024047       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:21:04.024362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:21:04.024118       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:21:04.024431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0805 23:21:04.860871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:04.861069       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:04.959895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 23:21:04.959949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 23:21:04.962444       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:21:04.962496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:21:04.968410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:21:04.968452       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:21:05.030527       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 23:21:05.030566       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 23:21:05.076451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:05.076659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:05.118159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:05.118676       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:05.141945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:21:05.142020       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0805 23:21:08.218627       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 23:32:06 multinode-985000 kubelet[2102]: E0805 23:32:06.388091    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:32:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:33:06 multinode-985000 kubelet[2102]: E0805 23:33:06.388876    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:33:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:34:06 multinode-985000 kubelet[2102]: E0805 23:34:06.388016    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:34:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:35:06 multinode-985000 kubelet[2102]: E0805 23:35:06.389737    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:35:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:35:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:35:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:35:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:36:06 multinode-985000 kubelet[2102]: E0805 23:36:06.388843    2102 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:36:06 multinode-985000 kubelet[2102]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:36:06 multinode-985000 kubelet[2102]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:36:06 multinode-985000 kubelet[2102]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:36:06 multinode-985000 kubelet[2102]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
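The kubelet section above ends with the same "Could not set up iptables canary" error repeating once per minute: ip6tables cannot initialize its `nat' table inside the guest, most likely because the Buildroot kernel does not ship the IPv6 NAT netfilter support. A minimal sketch for checking this by hand, assuming the multinode-985000 profile is still up (the ssh invocation is an assumption; only the profile name and the error text come from the logs above):

	# Does the IPv6 nat table exist in the guest? Expect the same "Table does not exist" error.
	out/minikube-darwin-amd64 ssh -p multinode-985000 "sudo ip6tables -t nat -L"
	# List loaded IPv6 netfilter modules; quoting keeps the pipe on the guest side.
	out/minikube-darwin-amd64 ssh -p multinode-985000 "lsmod | grep ip6"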
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-985000 -n multinode-985000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-985000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (83.24s)
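To replay the post-mortem queries from helpers_test.go by hand, the same two commands can be run directly; a minimal sketch, assuming the multinode-985000 profile from this run still exists (both command lines are copied verbatim from the log above, only the comments are added):

	# API server state as minikube reports it for the control-plane node.
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-985000 -n multinode-985000
	# Names of any pods not in the Running phase, across all namespaces.
	kubectl --context multinode-985000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running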

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (146.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-985000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-985000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-985000: (24.831759427s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-985000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-985000 --wait=true -v=8 --alsologtostderr: exit status 90 (1m57.53018208s)

                                                
                                                
-- stdout --
	* [multinode-985000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "multinode-985000" primary control-plane node in "multinode-985000" cluster
	* Restarting existing hyperkit VM for "multinode-985000" ...
	* Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-985000-m02" worker node in "multinode-985000" cluster
	* Restarting existing hyperkit VM for "multinode-985000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.13
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:37:19.344110    5521 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:37:19.344466    5521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:37:19.344474    5521 out.go:304] Setting ErrFile to fd 2...
	I0805 16:37:19.344479    5521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:37:19.344702    5521 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:37:19.346290    5521 out.go:298] Setting JSON to false
	I0805 16:37:19.368484    5521 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4010,"bootTime":1722897029,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:37:19.368574    5521 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:37:19.390244    5521 out.go:177] * [multinode-985000] minikube v1.33.1 on Darwin 14.5
	I0805 16:37:19.432083    5521 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:37:19.432145    5521 notify.go:220] Checking for updates...
	I0805 16:37:19.474965    5521 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:37:19.495989    5521 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:37:19.517187    5521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:37:19.537983    5521 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:37:19.558962    5521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:37:19.580823    5521 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:37:19.580992    5521 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:37:19.581649    5521 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:37:19.581721    5521 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:37:19.591086    5521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53115
	I0805 16:37:19.591452    5521 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:37:19.591907    5521 main.go:141] libmachine: Using API Version  1
	I0805 16:37:19.591915    5521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:37:19.592186    5521 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:37:19.592316    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:19.621203    5521 out.go:177] * Using the hyperkit driver based on existing profile
	I0805 16:37:19.663060    5521 start.go:297] selected driver: hyperkit
	I0805 16:37:19.663084    5521 start.go:901] validating driver "hyperkit" against &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:f
alse ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binary
Mirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:37:19.663335    5521 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:37:19.663521    5521 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:37:19.663719    5521 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 16:37:19.672949    5521 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 16:37:19.676917    5521 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:37:19.676939    5521 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 16:37:19.679650    5521 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:37:19.679719    5521 cni.go:84] Creating CNI manager for ""
	I0805 16:37:19.679731    5521 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0805 16:37:19.679807    5521 start.go:340] cluster config:
	{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:
false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:37:19.679904    5521 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:37:19.721789    5521 out.go:177] * Starting "multinode-985000" primary control-plane node in "multinode-985000" cluster
	I0805 16:37:19.742954    5521 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:37:19.743026    5521 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 16:37:19.743048    5521 cache.go:56] Caching tarball of preloaded images
	I0805 16:37:19.743247    5521 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:37:19.743265    5521 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:37:19.743456    5521 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:37:19.744298    5521 start.go:360] acquireMachinesLock for multinode-985000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:37:19.744469    5521 start.go:364] duration metric: took 148.41µs to acquireMachinesLock for "multinode-985000"
	I0805 16:37:19.744508    5521 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:37:19.744520    5521 fix.go:54] fixHost starting: 
	I0805 16:37:19.744954    5521 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:37:19.744979    5521 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:37:19.753692    5521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53117
	I0805 16:37:19.754053    5521 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:37:19.754374    5521 main.go:141] libmachine: Using API Version  1
	I0805 16:37:19.754383    5521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:37:19.754660    5521 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:37:19.754807    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:19.754921    5521 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:37:19.755005    5521 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:37:19.755109    5521 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:37:19.755997    5521 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid 4651 missing from process table
	I0805 16:37:19.756024    5521 fix.go:112] recreateIfNeeded on multinode-985000: state=Stopped err=<nil>
	I0805 16:37:19.756039    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	W0805 16:37:19.756134    5521 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:37:19.797962    5521 out.go:177] * Restarting existing hyperkit VM for "multinode-985000" ...
	I0805 16:37:19.821296    5521 main.go:141] libmachine: (multinode-985000) Calling .Start
	I0805 16:37:19.821573    5521 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:37:19.821663    5521 main.go:141] libmachine: (multinode-985000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid
	I0805 16:37:19.823405    5521 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid 4651 missing from process table
	I0805 16:37:19.823427    5521 main.go:141] libmachine: (multinode-985000) DBG | pid 4651 is in state "Stopped"
	I0805 16:37:19.823442    5521 main.go:141] libmachine: (multinode-985000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid...
	I0805 16:37:19.823689    5521 main.go:141] libmachine: (multinode-985000) DBG | Using UUID 3ac698fc-f622-443b-898d-9b152fa64288
	I0805 16:37:19.935040    5521 main.go:141] libmachine: (multinode-985000) DBG | Generated MAC e2:6:14:d2:13:ae
	I0805 16:37:19.935070    5521 main.go:141] libmachine: (multinode-985000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:37:19.935187    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a67e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I0805 16:37:19.935220    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a67e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I0805 16:37:19.935274    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3ac698fc-f622-443b-898d-9b152fa64288", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/1937
3-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:37:19.935303    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3ac698fc-f622-443b-898d-9b152fa64288 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=
serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
	I0805 16:37:19.935323    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:37:19.936734    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 DEBUG: hyperkit: Pid is 5533
	I0805 16:37:19.937092    5521 main.go:141] libmachine: (multinode-985000) DBG | Attempt 0
	I0805 16:37:19.937106    5521 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:37:19.937205    5521 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 5533
	I0805 16:37:19.939053    5521 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:37:19.939115    5521 main.go:141] libmachine: (multinode-985000) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0805 16:37:19.939146    5521 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:37:19.939167    5521 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b00c}
	I0805 16:37:19.939179    5521 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:37:19.939190    5521 main.go:141] libmachine: (multinode-985000) DBG | Found match: e2:6:14:d2:13:ae
	I0805 16:37:19.939202    5521 main.go:141] libmachine: (multinode-985000) DBG | IP: 192.169.0.13
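
The driver never asks the guest for its address: it derives a MAC from the VM UUID and scans the host's /var/db/dhcpd_leases for it, as the dhcp entry: lines above show. A sketch of that lookup (findIPByMAC is a hypothetical helper, and the block layout, ip_address= before hw_address=, is assumed to match the stock vmnet lease file):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findIPByMAC scans the lease file for an hw_address entry matching mac,
	// returning the ip_address seen earlier in the same block.
	func findIPByMAC(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				// format: hw_address=1,e2:6:14:d2:13:ae (hardware-type prefix "1,")
				hw := strings.TrimPrefix(line, "hw_address=")
				if i := strings.IndexByte(hw, ','); i >= 0 && hw[i+1:] == mac {
					return ip, nil
				}
			}
		}
		return "", fmt.Errorf("no lease found for %s", mac)
	}

	func main() {
		ip, err := findIPByMAC("/var/db/dhcpd_leases", "e2:6:14:d2:13:ae")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("IP:", ip) // expect 192.169.0.13 per the match above
	}
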
	I0805 16:37:19.939251    5521 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:37:19.939918    5521 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:37:19.940105    5521 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:37:19.940507    5521 machine.go:94] provisionDockerMachine start ...
	I0805 16:37:19.940521    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:19.940712    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:19.940833    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:19.940944    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:19.941063    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:19.941184    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:19.941317    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:19.941534    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:19.941543    5521 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:37:19.945439    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:37:19.998236    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:37:19.999189    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:37:19.999209    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:37:19.999217    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:37:19.999225    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:37:20.381357    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:37:20.381372    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:37:20.495827    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:37:20.495847    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:37:20.495864    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:37:20.495880    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:37:20.496727    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:37:20.496740    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:37:26.053033    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:26 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0805 16:37:26.053095    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:26 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0805 16:37:26.053106    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:26 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0805 16:37:26.078427    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:26 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0805 16:37:31.014343    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 16:37:31.014358    5521 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:37:31.014500    5521 buildroot.go:166] provisioning hostname "multinode-985000"
	I0805 16:37:31.014511    5521 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:37:31.014618    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.014720    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:31.014844    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.014943    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.015061    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:31.015194    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:31.015348    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:31.015359    5521 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000 && echo "multinode-985000" | sudo tee /etc/hostname
	I0805 16:37:31.093711    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000
	
	I0805 16:37:31.093738    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.093873    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:31.093973    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.094065    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.094154    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:31.094291    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:31.094436    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:31.094447    5521 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:37:31.166381    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:37:31.166401    5521 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:37:31.166420    5521 buildroot.go:174] setting up certificates
	I0805 16:37:31.166425    5521 provision.go:84] configureAuth start
	I0805 16:37:31.166432    5521 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:37:31.166566    5521 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:37:31.166671    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.166751    5521 provision.go:143] copyHostCerts
	I0805 16:37:31.166779    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:37:31.166848    5521 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:37:31.166856    5521 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:37:31.167016    5521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:37:31.167224    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:37:31.167266    5521 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:37:31.167271    5521 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:37:31.167361    5521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:37:31.167503    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:37:31.167542    5521 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:37:31.167553    5521 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:37:31.167640    5521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:37:31.167799    5521 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-985000]
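
provision.go:117 signs a fresh server certificate with the minikube CA, embedding the SAN list printed above so the Docker TLS endpoint answers for the VM IP as well as the local names. A self-contained crypto/x509 sketch of that step (it generates a throwaway CA instead of loading .minikube/certs/ca.pem, and error handling is elided for brevity):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Illustrative only: a throwaway CA stands in for ca.pem/ca-key.pem.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server cert carrying the SANs from the san=[...] log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-985000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "multinode-985000"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.13")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
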
	I0805 16:37:31.333929    5521 provision.go:177] copyRemoteCerts
	I0805 16:37:31.333986    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:37:31.334003    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.334141    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:31.334246    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.334341    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:31.334442    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:37:31.373502    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:37:31.373592    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:37:31.393275    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:37:31.393333    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0805 16:37:31.412894    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:37:31.412951    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:37:31.432545    5521 provision.go:87] duration metric: took 266.106701ms to configureAuth
	I0805 16:37:31.432558    5521 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:37:31.432725    5521 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:37:31.432742    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:31.432881    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.432989    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:31.433084    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.433176    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.433269    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:31.433395    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:31.433519    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:31.433527    5521 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:37:31.498617    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:37:31.498629    5521 buildroot.go:70] root file system type: tmpfs
	I0805 16:37:31.498708    5521 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:37:31.498721    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.498863    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:31.498974    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.499071    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.499155    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:31.499273    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:31.499401    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:31.499448    5521 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:37:31.575743    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:37:31.575771    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.575913    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:31.576016    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.576109    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.576205    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:31.576341    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:31.576481    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:31.576493    5521 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:37:33.234695    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:37:33.234711    5521 machine.go:97] duration metric: took 13.294178335s to provisionDockerMachine
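
The diff -u ... || { mv ...; systemctl ...; } one-liner above is the idempotency gate: the freshly rendered unit only replaces the installed one, followed by daemon-reload/enable/restart, when the two differ (here diff failed because no unit existed yet, so the whole block ran and the symlink was created). The same write-diff-swap pattern expressed locally in Go (installIfChanged is a hypothetical analogue; the real code runs it remotely through ssh_runner):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// installIfChanged writes the rendered unit into place and restarts the
	// service only when the installed copy differs (or is missing).
	func installIfChanged(path string, rendered []byte, unit string) error {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, rendered) {
			return nil // unchanged: skip daemon-reload and restart entirely
		}
		if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"daemon-reload"}, {"enable", unit}, {"restart", unit},
		} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated; full unit above
		if err := installIfChanged("/lib/systemd/system/docker.service", unit, "docker"); err != nil {
			fmt.Println(err)
		}
	}
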
	I0805 16:37:33.234727    5521 start.go:293] postStartSetup for "multinode-985000" (driver="hyperkit")
	I0805 16:37:33.234735    5521 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:37:33.234747    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:33.234933    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:37:33.234947    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:33.235048    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:33.235138    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:33.235219    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:33.235304    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:37:33.276364    5521 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:37:33.279613    5521 command_runner.go:130] > NAME=Buildroot
	I0805 16:37:33.279624    5521 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:37:33.279629    5521 command_runner.go:130] > ID=buildroot
	I0805 16:37:33.279635    5521 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:37:33.279641    5521 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:37:33.279904    5521 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:37:33.279915    5521 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:37:33.280022    5521 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:37:33.280208    5521 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:37:33.280215    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:37:33.280420    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:37:33.289381    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:37:33.319551    5521 start.go:296] duration metric: took 84.814531ms for postStartSetup
	I0805 16:37:33.319580    5521 fix.go:56] duration metric: took 13.575045291s for fixHost
	I0805 16:37:33.319592    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:33.319764    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:33.319879    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:33.319970    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:33.320074    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:33.320209    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:33.320347    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:33.320353    5521 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:37:33.386078    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722901053.539565012
	
	I0805 16:37:33.386090    5521 fix.go:216] guest clock: 1722901053.539565012
	I0805 16:37:33.386095    5521 fix.go:229] Guest: 2024-08-05 16:37:33.539565012 -0700 PDT Remote: 2024-08-05 16:37:33.319583 -0700 PDT m=+14.014329761 (delta=219.982012ms)
	I0805 16:37:33.386114    5521 fix.go:200] guest clock delta is within tolerance: 219.982012ms
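
fix.go runs date +%s.%N on the guest and compares it with the host wall clock, only forcing a resync when the delta exceeds a tolerance; here 219.982012ms passed. A sketch of that comparison using the two timestamps from the log (the 1s threshold is an assumption for this sketch, not minikube's configured value; float parsing loses sub-microsecond precision, which is fine at this scale):

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	func main() {
		// `date +%s.%N` output from the guest, as logged above.
		guestSecs, _ := strconv.ParseFloat("1722901053.539565012", 64)
		guest := time.Unix(0, int64(guestSecs*float64(time.Second)))

		// Host-side wall clock captured around the same moment (the "Remote:" value).
		host := time.Unix(0, int64(1722901053.319583*float64(time.Second)))

		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		const tolerance = time.Second // assumed threshold for this sketch
		fmt.Printf("delta=%v, within tolerance: %v\n", delta, delta <= tolerance)
	}
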
	I0805 16:37:33.386118    5521 start.go:83] releasing machines lock for "multinode-985000", held for 13.641620815s
	I0805 16:37:33.386138    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:33.386279    5521 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:37:33.386394    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:33.386730    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:33.386845    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:33.386917    5521 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:37:33.386942    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:33.387003    5521 ssh_runner.go:195] Run: cat /version.json
	I0805 16:37:33.387017    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:33.387030    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:33.387128    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:33.387144    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:33.387234    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:33.387245    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:33.387325    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:37:33.387345    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:33.387431    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:37:33.421764    5521 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0805 16:37:33.421883    5521 ssh_runner.go:195] Run: systemctl --version
	I0805 16:37:33.467550    5521 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:37:33.468651    5521 command_runner.go:130] > systemd 252 (252)
	I0805 16:37:33.468690    5521 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0805 16:37:33.468805    5521 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:37:33.473715    5521 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:37:33.473736    5521 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:37:33.473771    5521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:37:33.487255    5521 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:37:33.487298    5521 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:37:33.487311    5521 start.go:495] detecting cgroup driver to use...
	I0805 16:37:33.487409    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:37:33.501851    5521 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:37:33.502107    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:37:33.510909    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:37:33.519656    5521 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:37:33.519696    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:37:33.528321    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:37:33.536918    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:37:33.545942    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:37:33.554600    5521 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:37:33.563425    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:37:33.572074    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:37:33.580764    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:37:33.589491    5521 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:37:33.597187    5521 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:37:33.597327    5521 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:37:33.605146    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:37:33.699080    5521 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:37:33.715293    5521 start.go:495] detecting cgroup driver to use...
	I0805 16:37:33.715372    5521 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:37:33.725461    5521 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:37:33.725955    5521 command_runner.go:130] > [Unit]
	I0805 16:37:33.725965    5521 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:37:33.725969    5521 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:37:33.725974    5521 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:37:33.725979    5521 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:37:33.725989    5521 command_runner.go:130] > StartLimitBurst=3
	I0805 16:37:33.725993    5521 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:37:33.725997    5521 command_runner.go:130] > [Service]
	I0805 16:37:33.726001    5521 command_runner.go:130] > Type=notify
	I0805 16:37:33.726005    5521 command_runner.go:130] > Restart=on-failure
	I0805 16:37:33.726011    5521 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:37:33.726019    5521 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:37:33.726025    5521 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:37:33.726031    5521 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:37:33.726036    5521 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:37:33.726042    5521 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:37:33.726048    5521 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:37:33.726063    5521 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:37:33.726069    5521 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:37:33.726075    5521 command_runner.go:130] > ExecStart=
	I0805 16:37:33.726090    5521 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:37:33.726094    5521 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:37:33.726100    5521 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:37:33.726107    5521 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:37:33.726111    5521 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:37:33.726115    5521 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:37:33.726121    5521 command_runner.go:130] > LimitCORE=infinity
	I0805 16:37:33.726127    5521 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:37:33.726132    5521 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:37:33.726137    5521 command_runner.go:130] > TasksMax=infinity
	I0805 16:37:33.726141    5521 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:37:33.726158    5521 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:37:33.726161    5521 command_runner.go:130] > Delegate=yes
	I0805 16:37:33.726166    5521 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:37:33.726170    5521 command_runner.go:130] > KillMode=process
	I0805 16:37:33.726173    5521 command_runner.go:130] > [Install]
	I0805 16:37:33.726181    5521 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:37:33.726297    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:37:33.737088    5521 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:37:33.751275    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:37:33.762646    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:37:33.773482    5521 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:37:33.799587    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:37:33.810018    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:37:33.824851    5521 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:37:33.825036    5521 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:37:33.828060    5521 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:37:33.828191    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:37:33.835356    5521 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:37:33.848939    5521 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:37:33.941490    5521 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:37:34.038935    5521 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:37:34.039041    5521 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:37:34.053894    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:37:34.163116    5521 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:37:36.488671    5521 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.32553387s)
	I0805 16:37:36.488731    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:37:36.499891    5521 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 16:37:36.512512    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:37:36.522638    5521 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:37:36.618869    5521 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:37:36.714175    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:37:36.811543    5521 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:37:36.825669    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:37:36.836762    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:37:36.945275    5521 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:37:37.004002    5521 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:37:37.004108    5521 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:37:37.008235    5521 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0805 16:37:37.008254    5521 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0805 16:37:37.008260    5521 command_runner.go:130] > Device: 0,22	Inode: 751         Links: 1
	I0805 16:37:37.008265    5521 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0805 16:37:37.008270    5521 command_runner.go:130] > Access: 2024-08-05 23:37:37.112441730 +0000
	I0805 16:37:37.008274    5521 command_runner.go:130] > Modify: 2024-08-05 23:37:37.112441730 +0000
	I0805 16:37:37.008280    5521 command_runner.go:130] > Change: 2024-08-05 23:37:37.113441659 +0000
	I0805 16:37:37.008283    5521 command_runner.go:130] >  Birth: -
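
"Will wait 60s for socket path" above is a stat-poll loop: keep checking until the path exists as a socket or the deadline passes, then move on to the crictl version probe. A sketch (waitForSocket is a hypothetical helper; the 500ms poll interval is this sketch's choice):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists and is a Unix socket,
	// or returns an error once the deadline passes.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}
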
	I0805 16:37:37.008458    5521 start.go:563] Will wait 60s for crictl version
	I0805 16:37:37.008503    5521 ssh_runner.go:195] Run: which crictl
	I0805 16:37:37.011447    5521 command_runner.go:130] > /usr/bin/crictl
	I0805 16:37:37.011673    5521 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:37:37.037547    5521 command_runner.go:130] > Version:  0.1.0
	I0805 16:37:37.037560    5521 command_runner.go:130] > RuntimeName:  docker
	I0805 16:37:37.037564    5521 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0805 16:37:37.037568    5521 command_runner.go:130] > RuntimeApiVersion:  v1
	I0805 16:37:37.038675    5521 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0805 16:37:37.038749    5521 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:37:37.056467    5521 command_runner.go:130] > 27.1.1
	I0805 16:37:37.057465    5521 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:37:37.074514    5521 command_runner.go:130] > 27.1.1
	I0805 16:37:37.099565    5521 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 16:37:37.099612    5521 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:37:37.099970    5521 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0805 16:37:37.104644    5521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
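
The bash pipeline above makes the host.minikube.internal mapping idempotent: strip any existing tab-separated entry for the name, append the current one, and copy the result back over /etc/hosts. The same rewrite in pure Go (ensureHostsEntry is a hypothetical equivalent; the driver shells out exactly as logged):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry rewrites hosts content so exactly one line maps name
	// to ip, mirroring the `grep -v ...; echo ...` pipeline logged above.
	func ensureHostsEntry(content, ip, name string) string {
		var out []string
		for _, line := range strings.Split(content, "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // drop any stale mapping for this name
			}
			if line != "" {
				out = append(out, line)
			}
		}
		out = append(out, ip+"\t"+name)
		return strings.Join(out, "\n") + "\n"
	}

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Print(ensureHostsEntry(string(data), "192.169.0.1", "host.minikube.internal"))
	}
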
	I0805 16:37:37.114271    5521 kubeadm.go:883] updating cluster {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 16:37:37.114369    5521 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:37:37.114424    5521 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:37:37.126439    5521 command_runner.go:130] > kindest/kindnetd:v20240730-75a5af0c
	I0805 16:37:37.126453    5521 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0805 16:37:37.126458    5521 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0805 16:37:37.126462    5521 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0805 16:37:37.126465    5521 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0805 16:37:37.126469    5521 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0805 16:37:37.126473    5521 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0805 16:37:37.126477    5521 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0805 16:37:37.126481    5521 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:37:37.126485    5521 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0805 16:37:37.127412    5521 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240730-75a5af0c
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0805 16:37:37.127420    5521 docker.go:615] Images already preloaded, skipping extraction
	I0805 16:37:37.127486    5521 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:37:37.146140    5521 command_runner.go:130] > kindest/kindnetd:v20240730-75a5af0c
	I0805 16:37:37.146154    5521 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0805 16:37:37.146159    5521 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0805 16:37:37.146163    5521 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0805 16:37:37.146167    5521 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0805 16:37:37.146170    5521 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0805 16:37:37.146174    5521 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0805 16:37:37.146179    5521 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0805 16:37:37.146182    5521 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:37:37.146186    5521 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0805 16:37:37.146679    5521 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240730-75a5af0c
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0805 16:37:37.146698    5521 cache_images.go:84] Images are preloaded, skipping loading
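
cache_images.go:84 reaches "Images are preloaded" by listing docker images --format {{.Repository}}:{{.Tag}} and checking that every image the Kubernetes version needs is present; only a miss would trigger tarball extraction. A sketch of that check, seeded with part of the list printed above:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		required := []string{
			"registry.k8s.io/kube-apiserver:v1.30.3",
			"registry.k8s.io/kube-controller-manager:v1.30.3",
			"registry.k8s.io/kube-scheduler:v1.30.3",
			"registry.k8s.io/kube-proxy:v1.30.3",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/coredns/coredns:v1.11.1",
			"registry.k8s.io/pause:3.9",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		}
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		have := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		for _, img := range required {
			if !have[img] {
				fmt.Println("missing, would trigger preload extraction:", img)
				return
			}
		}
		fmt.Println("Images are preloaded, skipping loading")
	}
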
	I0805 16:37:37.146707    5521 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0805 16:37:37.146784    5521 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-985000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 16:37:37.146863    5521 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 16:37:37.182908    5521 command_runner.go:130] > cgroupfs
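
The docker info --format {{.CgroupDriver}} probe above matters because kubelet must be configured with the same driver the runtime reports; its answer, cgroupfs, is what later lands in the KubeletConfiguration's cgroupDriver field. A minimal sketch of that probe:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same probe as the log line above.
		out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		driver := strings.TrimSpace(string(out)) // "cgroupfs" here
		// kubelet must agree with the runtime, so this value feeds the
		// cgroupDriver field of the KubeletConfiguration rendered below.
		fmt.Println("cgroupDriver:", driver)
	}
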
	I0805 16:37:37.183498    5521 cni.go:84] Creating CNI manager for ""
	I0805 16:37:37.183509    5521 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0805 16:37:37.183518    5521 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 16:37:37.183536    5521 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-985000 NodeName:multinode-985000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 16:37:37.183619    5521 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-985000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
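kubeadm.go:187 renders the config above from the options struct logged at kubeadm.go:181. A trimmed text/template sketch of that rendering step, covering only the InitConfiguration stanza (field names here belong to the sketch, not to minikube's actual template):

	package main

	import (
		"os"
		"text/template"
	)

	// Trimmed stand-in for the full config rendered above; only the
	// InitConfiguration stanza is shown.
	const tpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.NodeIP}}
	  taints: []
	`

	func main() {
		opts := struct {
			AdvertiseAddress, CRISocket, NodeName, NodeIP string
			APIServerPort                                 int
		}{
			AdvertiseAddress: "192.169.0.13",
			CRISocket:        "unix:///var/run/cri-dockerd.sock",
			NodeName:         "multinode-985000",
			NodeIP:           "192.169.0.13",
			APIServerPort:    8443,
		}
		template.Must(template.New("kubeadm").Parse(tpl)).Execute(os.Stdout, opts)
	}
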
	I0805 16:37:37.183677    5521 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 16:37:37.192063    5521 command_runner.go:130] > kubeadm
	I0805 16:37:37.192073    5521 command_runner.go:130] > kubectl
	I0805 16:37:37.192078    5521 command_runner.go:130] > kubelet
	I0805 16:37:37.192202    5521 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:37:37.192247    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 16:37:37.200175    5521 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0805 16:37:37.213737    5521 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:37:37.227101    5521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0805 16:37:37.240845    5521 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0805 16:37:37.243830    5521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:37:37.253870    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:37:37.350271    5521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:37:37.365726    5521 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000 for IP: 192.169.0.13
	I0805 16:37:37.365744    5521 certs.go:194] generating shared ca certs ...
	I0805 16:37:37.365760    5521 certs.go:226] acquiring lock for ca certs: {Name:mkb83e058d89c7d4e66f4136f377a3c305b13735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:37:37.366000    5521 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key
	I0805 16:37:37.366088    5521 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key
	I0805 16:37:37.366102    5521 certs.go:256] generating profile certs ...
	I0805 16:37:37.366219    5521 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key
	I0805 16:37:37.366302    5521 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec
	I0805 16:37:37.366434    5521 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key
	I0805 16:37:37.366447    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 16:37:37.366477    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 16:37:37.366498    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 16:37:37.366518    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 16:37:37.366537    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 16:37:37.366569    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 16:37:37.366600    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 16:37:37.366630    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 16:37:37.366732    5521 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem (1338 bytes)
	W0805 16:37:37.366808    5521 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0805 16:37:37.366821    5521 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:37:37.366859    5521 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem (1082 bytes)
	I0805 16:37:37.366891    5521 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:37:37.366923    5521 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem (1675 bytes)
	I0805 16:37:37.366996    5521 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:37:37.367034    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:37:37.367064    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem -> /usr/share/ca-certificates/1678.pem
	I0805 16:37:37.367086    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /usr/share/ca-certificates/16782.pem
	I0805 16:37:37.367546    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:37:37.395681    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 16:37:37.414513    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:37:37.433690    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:37:37.452500    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 16:37:37.472109    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 16:37:37.491753    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:37:37.511029    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 16:37:37.530071    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:37:37.549206    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0805 16:37:37.568348    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0805 16:37:37.587345    5521 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 16:37:37.600856    5521 ssh_runner.go:195] Run: openssl version
	I0805 16:37:37.605037    5521 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0805 16:37:37.605082    5521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:37:37.614106    5521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:37:37.617312    5521 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:37:37.617414    5521 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:37:37.617448    5521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:37:37.621389    5521 command_runner.go:130] > b5213941
	I0805 16:37:37.621569    5521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 16:37:37.630682    5521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0805 16:37:37.639868    5521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0805 16:37:37.643124    5521 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:37:37.643203    5521 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:37:37.643234    5521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0805 16:37:37.647330    5521 command_runner.go:130] > 51391683
	I0805 16:37:37.647529    5521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0805 16:37:37.656868    5521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0805 16:37:37.665981    5521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0805 16:37:37.669370    5521 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:37:37.669486    5521 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:37:37.669522    5521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0805 16:37:37.673595    5521 command_runner.go:130] > 3ec20f2e
	I0805 16:37:37.673823    5521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
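The repeated openssl/ln pairs above install each CA into the system trust store under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem). A sketch of one such install in Go, shelling out to openssl for the hash; paths and the helper name are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of a certificate and
// symlinks <hash>.0 in the trust directory at it, matching the logged
// openssl x509 -hash plus ln -fs pair.
func linkBySubjectHash(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(trustDir, hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}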
	I0805 16:37:37.683082    5521 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:37:37.686344    5521 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:37:37.686356    5521 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0805 16:37:37.686361    5521 command_runner.go:130] > Device: 253,1	Inode: 3149128     Links: 1
	I0805 16:37:37.686366    5521 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 16:37:37.686371    5521 command_runner.go:130] > Access: 2024-08-05 23:20:58.401066212 +0000
	I0805 16:37:37.686375    5521 command_runner.go:130] > Modify: 2024-08-05 23:20:58.401066212 +0000
	I0805 16:37:37.686399    5521 command_runner.go:130] > Change: 2024-08-05 23:20:58.401066212 +0000
	I0805 16:37:37.686409    5521 command_runner.go:130] >  Birth: 2024-08-05 23:20:58.401066212 +0000
	I0805 16:37:37.686482    5521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 16:37:37.690751    5521 command_runner.go:130] > Certificate will not expire
	I0805 16:37:37.690873    5521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 16:37:37.695013    5521 command_runner.go:130] > Certificate will not expire
	I0805 16:37:37.695212    5521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 16:37:37.700369    5521 command_runner.go:130] > Certificate will not expire
	I0805 16:37:37.700476    5521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 16:37:37.704551    5521 command_runner.go:130] > Certificate will not expire
	I0805 16:37:37.704708    5521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 16:37:37.708755    5521 command_runner.go:130] > Certificate will not expire
	I0805 16:37:37.708896    5521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0805 16:37:37.713109    5521 command_runner.go:130] > Certificate will not expire
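Each `openssl x509 -checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check can be done in-process with crypto/x509; this sketch is an equivalent reimplementation, not the code minikube runs:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// inside the given window, the question -checkend 86400 answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire within 24h")
	} else {
		fmt.Println("Certificate will not expire")
	}
}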
	I0805 16:37:37.713257    5521 kubeadm.go:392] StartCluster: {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:37:37.713368    5521 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:37:37.727282    5521 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 16:37:37.735614    5521 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0805 16:37:37.735623    5521 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0805 16:37:37.735628    5521 command_runner.go:130] > /var/lib/minikube/etcd:
	I0805 16:37:37.735631    5521 command_runner.go:130] > member
	I0805 16:37:37.735761    5521 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 16:37:37.735771    5521 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 16:37:37.735817    5521 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 16:37:37.743915    5521 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:37:37.744222    5521 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-985000" does not appear in /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:37:37.744310    5521 kubeconfig.go:62] /Users/jenkins/minikube-integration/19373-1122/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-985000" cluster setting kubeconfig missing "multinode-985000" context setting]
	I0805 16:37:37.744520    5521 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/kubeconfig: {Name:mk2a0d8b4d330b3c26432fc65d015ddf98a9cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:37:37.745178    5521 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:37:37.745371    5521 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa6d2060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:37:37.745697    5521 cert_rotation.go:137] Starting client certificate rotation controller
	I0805 16:37:37.745867    5521 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 16:37:37.753787    5521 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.13
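The `diff -u kubeadm.yaml kubeadm.yaml.new` probe above is what lets the log conclude "does not require reconfiguration": if the staged config matches the one the cluster was started with, the control plane can be restarted in place. A sketch of that decision (needsReconfiguration is a hypothetical helper name):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// needsReconfiguration mirrors the diff check: identical current and staged
// configs mean an in-place restart is enough.
func needsReconfiguration(current, staged string) (bool, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		return true, err // a missing current config forces a reconfigure
	}
	b, err := os.ReadFile(staged)
	if err != nil {
		return true, err
	}
	return !bytes.Equal(a, b), nil
}

func main() {
	changed, _ := needsReconfiguration("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println("needs reconfiguration:", changed)
}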
	I0805 16:37:37.753807    5521 kubeadm.go:1160] stopping kube-system containers ...
	I0805 16:37:37.753864    5521 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:37:37.767689    5521 command_runner.go:130] > c9365aec3389
	I0805 16:37:37.767700    5521 command_runner.go:130] > 3d9fd612d0b1
	I0805 16:37:37.767703    5521 command_runner.go:130] > 2a8cd74365e9
	I0805 16:37:37.767706    5521 command_runner.go:130] > 35b9ac42edc0
	I0805 16:37:37.767710    5521 command_runner.go:130] > 724e5cfab0a2
	I0805 16:37:37.767713    5521 command_runner.go:130] > d58ca48f9f8b
	I0805 16:37:37.767717    5521 command_runner.go:130] > 65a1122097f0
	I0805 16:37:37.767720    5521 command_runner.go:130] > c91338eb0e13
	I0805 16:37:37.767729    5521 command_runner.go:130] > 792feba1a6f6
	I0805 16:37:37.767733    5521 command_runner.go:130] > 1fdd85b796ab
	I0805 16:37:37.767739    5521 command_runner.go:130] > d11865076c64
	I0805 16:37:37.767743    5521 command_runner.go:130] > 608878b33f35
	I0805 16:37:37.767746    5521 command_runner.go:130] > c86e04eb7823
	I0805 16:37:37.767749    5521 command_runner.go:130] > 55a20063845e
	I0805 16:37:37.767753    5521 command_runner.go:130] > b58900db5299
	I0805 16:37:37.767756    5521 command_runner.go:130] > 569788c2699f
	I0805 16:37:37.768462    5521 docker.go:483] Stopping containers: [c9365aec3389 3d9fd612d0b1 2a8cd74365e9 35b9ac42edc0 724e5cfab0a2 d58ca48f9f8b 65a1122097f0 c91338eb0e13 792feba1a6f6 1fdd85b796ab d11865076c64 608878b33f35 c86e04eb7823 55a20063845e b58900db5299 569788c2699f]
	I0805 16:37:37.768536    5521 ssh_runner.go:195] Run: docker stop c9365aec3389 3d9fd612d0b1 2a8cd74365e9 35b9ac42edc0 724e5cfab0a2 d58ca48f9f8b 65a1122097f0 c91338eb0e13 792feba1a6f6 1fdd85b796ab d11865076c64 608878b33f35 c86e04eb7823 55a20063845e b58900db5299 569788c2699f
	I0805 16:37:37.780204    5521 command_runner.go:130] > c9365aec3389
	I0805 16:37:37.781733    5521 command_runner.go:130] > 3d9fd612d0b1
	I0805 16:37:37.781870    5521 command_runner.go:130] > 2a8cd74365e9
	I0805 16:37:37.781981    5521 command_runner.go:130] > 35b9ac42edc0
	I0805 16:37:37.782219    5521 command_runner.go:130] > 724e5cfab0a2
	I0805 16:37:37.782404    5521 command_runner.go:130] > d58ca48f9f8b
	I0805 16:37:37.782493    5521 command_runner.go:130] > 65a1122097f0
	I0805 16:37:37.783962    5521 command_runner.go:130] > c91338eb0e13
	I0805 16:37:37.783968    5521 command_runner.go:130] > 792feba1a6f6
	I0805 16:37:37.783972    5521 command_runner.go:130] > 1fdd85b796ab
	I0805 16:37:37.783977    5521 command_runner.go:130] > d11865076c64
	I0805 16:37:37.784750    5521 command_runner.go:130] > 608878b33f35
	I0805 16:37:37.784758    5521 command_runner.go:130] > c86e04eb7823
	I0805 16:37:37.784761    5521 command_runner.go:130] > 55a20063845e
	I0805 16:37:37.784893    5521 command_runner.go:130] > b58900db5299
	I0805 16:37:37.784898    5521 command_runner.go:130] > 569788c2699f
	I0805 16:37:37.785811    5521 ssh_runner.go:195] Run: sudo systemctl stop kubelet
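The lines above show the teardown order for a restart: enumerate kube-system containers by name filter, stop them all in one `docker stop`, then stop the kubelet. A sketch of the same sequence via os/exec, assuming the docker CLI and systemctl are on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystem lists every container whose name matches
// k8s_.*_(kube-system)_, stops them in one invocation, then stops the
// kubelet, mirroring the logged order.
func stopKubeSystem() error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) > 0 {
		args := append([]string{"stop"}, ids...)
		if err := exec.Command("docker", args...).Run(); err != nil {
			return err
		}
	}
	return exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
}

func main() {
	if err := stopKubeSystem(); err != nil {
		fmt.Println("stop failed:", err)
	}
}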
	I0805 16:37:37.798972    5521 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 16:37:37.807138    5521 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0805 16:37:37.807150    5521 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0805 16:37:37.807156    5521 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0805 16:37:37.807162    5521 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:37:37.807183    5521 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:37:37.807189    5521 kubeadm.go:157] found existing configuration files:
	
	I0805 16:37:37.807236    5521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 16:37:37.815004    5521 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:37:37.815022    5521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:37:37.815068    5521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 16:37:37.823210    5521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 16:37:37.831025    5521 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:37:37.831041    5521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:37:37.831080    5521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 16:37:37.839362    5521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 16:37:37.847024    5521 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:37:37.847043    5521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:37:37.847077    5521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 16:37:37.855156    5521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 16:37:37.862975    5521 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:37:37.862994    5521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:37:37.863026    5521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
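The grep-then-rm loop above removes any kubeconfig that does not already point at https://control-plane.minikube.internal:8443, so the following `kubeadm init phase kubeconfig` run regenerates them all. A compact sketch of the same cleanup (cleanStaleConfigs is an illustrative name):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConfigs keeps a kubeconfig only if it already targets the
// expected control-plane endpoint; otherwise it is removed so the init
// phases write a fresh one.
func cleanStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config already targets the right endpoint
		}
		if err := os.Remove(p); err != nil && !os.IsNotExist(err) {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}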
	I0805 16:37:37.871334    5521 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 16:37:37.879543    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:37:37.943566    5521 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:37:37.943663    5521 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0805 16:37:37.943824    5521 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0805 16:37:37.943956    5521 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 16:37:37.944158    5521 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0805 16:37:37.944374    5521 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0805 16:37:37.944697    5521 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0805 16:37:37.944812    5521 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0805 16:37:37.945011    5521 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0805 16:37:37.945077    5521 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 16:37:37.945285    5521 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 16:37:37.946228    5521 command_runner.go:130] > [certs] Using the existing "sa" key
	I0805 16:37:37.946304    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:37:39.167358    5521 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:37:39.167371    5521 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:37:39.167376    5521 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 16:37:39.167380    5521 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:37:39.167385    5521 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:37:39.167390    5521 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:37:39.167425    5521 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.221104057s)
	I0805 16:37:39.167438    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:37:39.219662    5521 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:37:39.220354    5521 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:37:39.220389    5521 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0805 16:37:39.339247    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:37:39.389550    5521 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:37:39.389565    5521 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:37:39.391233    5521 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:37:39.391757    5521 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:37:39.393094    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:37:39.451609    5521 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
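Rather than a full `kubeadm init`, the restart path replays individual init phases against the staged config, reusing certs and manifests that are still valid; those are exactly the five invocations logged above. A sketch that drives the same phases locally via os/exec (minikube itself runs them over SSH inside the VM, with the pinned binaries directory on PATH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// runInitPhases invokes each kubeadm init phase in the order the log shows.
func runInitPhases(binDir, config string) error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", config)
		cmd := exec.Command(filepath.Join(binDir, "kubeadm"), args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("kubeadm %v: %w", p, err)
		}
	}
	return nil
}

func main() {
	if err := runInitPhases("/var/lib/minikube/binaries/v1.30.3", "/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}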
	I0805 16:37:39.461516    5521 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:37:39.461580    5521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:37:39.963685    5521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:37:40.462977    5521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:37:40.475006    5521 command_runner.go:130] > 1713
	I0805 16:37:40.475163    5521 api_server.go:72] duration metric: took 1.013654502s to wait for apiserver process to appear ...
	I0805 16:37:40.475173    5521 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:37:40.475189    5521 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:37:42.515953    5521 api_server.go:279] https://192.169.0.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 16:37:42.515968    5521 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 16:37:42.515976    5521 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:37:42.561960    5521 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 16:37:42.561978    5521 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 16:37:42.975764    5521 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:37:42.980706    5521 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 16:37:42.980725    5521 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 16:37:43.476837    5521 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:37:43.480708    5521 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 16:37:43.480721    5521 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 16:37:43.976652    5521 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:37:43.982020    5521 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
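The healthz exchange above is a plain poll: the 403 while anonymous auth settles and the 500s while poststarthooks finish are both treated as "retry in 500ms" until a bare 200/ok arrives. A self-contained sketch of that loop; certificate verification is skipped here only to keep it short, whereas minikube trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz GETs /healthz every 500ms until it returns 200 or the
// overall timeout elapses, mirroring the logged polling behaviour.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
		Timeout: 2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered "ok"
			}
			fmt.Printf("healthz %d: %.40s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy")
}

func main() {
	if err := waitForHealthz("https://192.169.0.13:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}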
	I0805 16:37:43.982084    5521 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0805 16:37:43.982089    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:43.982096    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:43.982100    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:43.991478    5521 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0805 16:37:43.991491    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:43.991496    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:43.991499    5521 round_trippers.go:580]     Content-Length: 263
	I0805 16:37:43.991501    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:44 GMT
	I0805 16:37:43.991503    5521 round_trippers.go:580]     Audit-Id: c8ad866d-278d-4a88-b577-2337c27f176f
	I0805 16:37:43.991506    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:43.991508    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:43.991511    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:43.991536    5521 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0805 16:37:43.991580    5521 api_server.go:141] control plane version: v1.30.3
	I0805 16:37:43.991595    5521 api_server.go:131] duration metric: took 3.5164126s to wait for apiserver health ...
	I0805 16:37:43.991603    5521 cni.go:84] Creating CNI manager for ""
	I0805 16:37:43.991607    5521 cni.go:136] multinode detected (3 nodes found), recommending kindnet
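With three nodes found and no CNI requested, cni.go recommends kindnet so pod traffic can route between VMs. The rule below is a deliberate simplification of minikube's actual selection logic, shown only to make the branch visible:

package main

import "fmt"

// chooseCNI returns the requested CNI if one was given; otherwise multinode
// clusters get kindnet and single nodes fall back to the default behaviour.
func chooseCNI(requested string, nodeCount int) string {
	if requested != "" {
		return requested
	}
	if nodeCount > 1 {
		return "kindnet"
	}
	return "auto"
}

func main() {
	fmt.Println(chooseCNI("", 3)) // "kindnet", as in this 3-node cluster
}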
	I0805 16:37:44.014799    5521 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0805 16:37:44.035887    5521 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0805 16:37:44.053905    5521 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0805 16:37:44.053923    5521 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0805 16:37:44.053930    5521 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0805 16:37:44.053942    5521 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 16:37:44.053946    5521 command_runner.go:130] > Access: 2024-08-05 23:37:30.300677873 +0000
	I0805 16:37:44.053950    5521 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0805 16:37:44.053955    5521 command_runner.go:130] > Change: 2024-08-05 23:37:28.153646920 +0000
	I0805 16:37:44.053958    5521 command_runner.go:130] >  Birth: -
	I0805 16:37:44.054010    5521 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0805 16:37:44.054018    5521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0805 16:37:44.078089    5521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0805 16:37:44.397453    5521 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0805 16:37:44.418847    5521 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0805 16:37:44.539954    5521 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0805 16:37:44.626597    5521 command_runner.go:130] > daemonset.apps/kindnet configured
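The apply step above uses the kubectl binary pinned to the cluster's Kubernetes version and the in-VM kubeconfig, so nothing on the host needs to match v1.30.3. A sketch of the equivalent invocation; note it would have to run inside the VM, since minikube issues it over SSH:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyCNIManifest replays the logged kubectl apply of the staged cni.yaml,
// using the version-pinned binary and the in-VM kubeconfig.
func applyCNIManifest() error {
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.30.3/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := applyCNIManifest(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}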
	I0805 16:37:44.629867    5521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 16:37:44.629936    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:37:44.629941    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.629947    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.629953    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.636693    5521 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 16:37:44.636713    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.636721    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:44 GMT
	I0805 16:37:44.636727    5521 round_trippers.go:580]     Audit-Id: 06b7f684-2b8a-4634-9922-7ad84cb7e6e5
	I0805 16:37:44.636731    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.636737    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.636741    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.636746    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.638935    5521 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1387"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 73649 chars]
	I0805 16:37:44.641759    5521 system_pods.go:59] 10 kube-system pods found
	I0805 16:37:44.641784    5521 system_pods.go:61] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 16:37:44.641790    5521 system_pods.go:61] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 16:37:44.641795    5521 system_pods.go:61] "kindnet-5kfjr" [d68d8211-58f0-4a8f-904a-c6f9f530d58d] Running
	I0805 16:37:44.641799    5521 system_pods.go:61] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0805 16:37:44.641804    5521 system_pods.go:61] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 16:37:44.641808    5521 system_pods.go:61] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 16:37:44.641814    5521 system_pods.go:61] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0805 16:37:44.641818    5521 system_pods.go:61] "kube-proxy-s65dd" [25cd7fe5-8af2-4869-be11-1eb8c5a7ec01] Running
	I0805 16:37:44.641842    5521 system_pods.go:61] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 16:37:44.641847    5521 system_pods.go:61] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 16:37:44.641852    5521 system_pods.go:74] duration metric: took 11.975799ms to wait for pod list to return data ...
	I0805 16:37:44.641861    5521 node_conditions.go:102] verifying NodePressure condition ...
	I0805 16:37:44.641901    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0805 16:37:44.641906    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.641911    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.641915    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.647494    5521 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 16:37:44.647507    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.647513    5521 round_trippers.go:580]     Audit-Id: 51276e8a-8d41-468a-8372-932c99dbe3e8
	I0805 16:37:44.647516    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.647518    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.647539    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.647544    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.647547    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:44 GMT
	I0805 16:37:44.647674    5521 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1388"},"items":[{"metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10158 chars]
	I0805 16:37:44.648158    5521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:37:44.648172    5521 node_conditions.go:123] node cpu capacity is 2
	I0805 16:37:44.648182    5521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:37:44.648186    5521 node_conditions.go:123] node cpu capacity is 2
	I0805 16:37:44.648190    5521 node_conditions.go:105] duration metric: took 6.325811ms to run NodePressure ...
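The NodePressure check walks every node and records its ephemeral-storage and CPU capacity, producing the two pairs of lines above. A hedged client-go sketch that prints the same two figures, assuming the kubeconfig path from this run:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the kubeconfig minikube just repaired.
	config, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19373-1122/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Print the same capacity figures node_conditions.go logs per node.
	for _, n := range nodes.Items {
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
	}
}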
	I0805 16:37:44.648205    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:37:44.761435    5521 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0805 16:37:44.914201    5521 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0805 16:37:44.915254    5521 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 16:37:44.915318    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0805 16:37:44.915324    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.915331    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.915334    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.917615    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:44.917630    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.917640    5521 round_trippers.go:580]     Audit-Id: 84aaee6c-4475-49f2-8185-30cc2c755e1c
	I0805 16:37:44.917647    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.917651    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.917654    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.917657    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.917660    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.918012    5521 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1392"},"items":[{"metadata":{"name":"etcd-multinode-985000","namespace":"kube-system","uid":"8d7ca2d9-8c7b-41b9-a199-de6449107471","resourceVersion":"1380","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.mirror":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030299Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 30917 chars]
	I0805 16:37:44.918731    5521 kubeadm.go:739] kubelet initialised
	I0805 16:37:44.918740    5521 kubeadm.go:740] duration metric: took 3.47538ms waiting for restarted kubelet to initialise ...
	I0805 16:37:44.918747    5521 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
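"Ready" in the waiter above means the pod's PodReady condition reports True; pods listed as Running / ContainersNotReady do not qualify yet. A minimal sketch of that predicate (isPodReady is an illustrative helper, not minikube's code):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's PodReady condition is True, the
// test behind each "waiting for pod ... to be Ready" line.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{
			{Type: corev1.PodReady, Status: corev1.ConditionFalse},
		},
	}}
	fmt.Println(isPodReady(pod)) // false: containers still coming back up
}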
	I0805 16:37:44.918798    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:37:44.918804    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.918810    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.918815    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.920859    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:44.920866    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.920871    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.920873    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.920876    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.920878    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.920880    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.920883    5521 round_trippers.go:580]     Audit-Id: 51e54f33-9547-4470-b9ba-c080f1387d56
	I0805 16:37:44.921402    5521 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1392"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 73056 chars]
	I0805 16:37:44.922957    5521 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:44.922999    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:44.923004    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.923008    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.923011    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.924336    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:44.924346    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.924352    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.924355    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.924361    5521 round_trippers.go:580]     Audit-Id: e46b48bf-5949-4a1a-88ca-0532f6b9c8c3
	I0805 16:37:44.924364    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.924366    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.924368    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.924440    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0805 16:37:44.924683    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:44.924690    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.924696    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.924702    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.925980    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:44.925990    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.925998    5521 round_trippers.go:580]     Audit-Id: 28537896-265f-4611-9cfa-95ab32a9f5dc
	I0805 16:37:44.926004    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.926014    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.926018    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.926020    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.926023    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.926150    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:44.926329    5521 pod_ready.go:97] node "multinode-985000" hosting pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:44.926339    5521 pod_ready.go:81] duration metric: took 3.373593ms for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
	E0805 16:37:44.926345    5521 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-985000" hosting pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
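(The "(skipping!)" lines above come from a gate in minikube's pod_ready.go: a system pod's Ready wait is abandoned early whenever its hosting node's NodeReady condition is anything other than True, since the pod cannot become Ready on a NotReady node. An illustrative reimplementation of that check — not minikube's actual code:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeIsReady reports whether the NodeReady condition on n has status True,
// i.e. the opposite of the `has status "Ready":"False"` case logged above.
func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false // node has not reported a Ready condition yet
}

func main() {
	n := &corev1.Node{}
	n.Status.Conditions = []corev1.NodeCondition{
		{Type: corev1.NodeReady, Status: corev1.ConditionFalse},
	}
	fmt.Println(nodeIsReady(n)) // false, matching the node state in this log
}

The same gate fires for each control-plane pod below, which is why every per-pod wait here completes in milliseconds instead of blocking for its 4m0s budget.)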
	I0805 16:37:44.926352    5521 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:44.926380    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-985000
	I0805 16:37:44.926385    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.926390    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.926394    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.927346    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:37:44.927354    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.927359    5521 round_trippers.go:580]     Audit-Id: 156a7215-933a-4e99-a1ed-5cbaef6005e2
	I0805 16:37:44.927362    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.927366    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.927371    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.927376    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.927381    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.927503    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-985000","namespace":"kube-system","uid":"8d7ca2d9-8c7b-41b9-a199-de6449107471","resourceVersion":"1380","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.mirror":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030299Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0805 16:37:44.927709    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:44.927716    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.927722    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.927726    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.928738    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:44.928746    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.928753    5521 round_trippers.go:580]     Audit-Id: d454e0d3-91a1-437f-9641-9eb40301fb8f
	I0805 16:37:44.928758    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.928762    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.928767    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.928790    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.928796    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.928901    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:44.929068    5521 pod_ready.go:97] node "multinode-985000" hosting pod "etcd-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:44.929083    5521 pod_ready.go:81] duration metric: took 2.726167ms for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	E0805 16:37:44.929089    5521 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-985000" hosting pod "etcd-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:44.929115    5521 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:44.929157    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-985000
	I0805 16:37:44.929163    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.929168    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.929172    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.930121    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:37:44.930130    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.930134    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.930137    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.930139    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.930142    5521 round_trippers.go:580]     Audit-Id: 04a0388e-012b-4775-93ee-012b587c4ce5
	I0805 16:37:44.930153    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.930157    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.930304    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-985000","namespace":"kube-system","uid":"9be3378a-5fab-4907-baad-507918e714e4","resourceVersion":"1377","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.mirror":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8136 chars]
	I0805 16:37:44.930549    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:44.930558    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.930562    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.930567    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.931628    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:44.931636    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.931641    5521 round_trippers.go:580]     Audit-Id: 72e9cf52-6af7-45fd-a39e-e10ac17a459d
	I0805 16:37:44.931646    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.931652    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.931657    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.931660    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.931663    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.931772    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:44.931949    5521 pod_ready.go:97] node "multinode-985000" hosting pod "kube-apiserver-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:44.931958    5521 pod_ready.go:81] duration metric: took 2.833903ms for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	E0805 16:37:44.931964    5521 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-985000" hosting pod "kube-apiserver-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:44.931970    5521 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:44.931996    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-985000
	I0805 16:37:44.932000    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.932006    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.932009    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.933363    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:44.933370    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.933375    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.933379    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.933383    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.933389    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.933392    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.933395    5521 round_trippers.go:580]     Audit-Id: 993e7085-2a06-4126-8cc5-0d75a41d047f
	I0805 16:37:44.933659    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-985000","namespace":"kube-system","uid":"4ad64361-65de-4b0b-b2a3-07df18c2e603","resourceVersion":"1378","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.mirror":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.seen":"2024-08-05T23:21:06.366027130Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7727 chars]
	I0805 16:37:45.030087    5521 request.go:629] Waited for 96.18446ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:45.030215    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:45.030223    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:45.030234    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:45.030255    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:45.032395    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:45.032407    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:45.032414    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:45.032418    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:45.032423    5521 round_trippers.go:580]     Audit-Id: fd76f05c-aa0d-49d6-bc15-f6320e076edc
	I0805 16:37:45.032426    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:45.032428    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:45.032432    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:45.032710    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:45.032917    5521 pod_ready.go:97] node "multinode-985000" hosting pod "kube-controller-manager-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:45.032927    5521 pod_ready.go:81] duration metric: took 100.952173ms for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	E0805 16:37:45.032933    5521 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-985000" hosting pod "kube-controller-manager-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
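(The "Waited for ... due to client-side throttling, not priority and fairness" lines above and below are emitted by client-go's local token-bucket rate limiter, not by the API server: once the burst allowance is spent, each further request is delayed client-side. client-go's defaults are QPS=5 and Burst=10, which matches the ~100-200ms waits seen here. A hedged sketch of where that knob lives, with illustrative (not recommended) values:

package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	// Raising QPS/Burst on the rest.Config trades API-server load for
	// latency; the defaults (5/10) are what produce the throttling
	// messages in this log. Values below are purely illustrative.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		log.Fatal(err)
	}
}

Server-side priority-and-fairness would surface differently (as 429s or queuing headers), which is why the message explicitly rules it out.)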
	I0805 16:37:45.032940    5521 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:45.231074    5521 request.go:629] Waited for 198.067218ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwgw7
	I0805 16:37:45.231166    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwgw7
	I0805 16:37:45.231251    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:45.231259    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:45.231265    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:45.233956    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:45.233970    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:45.233977    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:45.234001    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:45.234024    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:45.234036    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:45.234040    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:45.234045    5521 round_trippers.go:580]     Audit-Id: a628a40a-acc3-4a40-8f85-01be7202c746
	I0805 16:37:45.234163    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fwgw7","generateName":"kube-proxy-","namespace":"kube-system","uid":"3fb72e39-699d-4123-ae5e-e314a191d904","resourceVersion":"1388","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b6258e6-7b31-4600-b32b-4a269867c123","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b6258e6-7b31-4600-b32b-4a269867c123\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0805 16:37:45.430145    5521 request.go:629] Waited for 195.640146ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:45.430221    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:45.430232    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:45.430243    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:45.430253    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:45.432534    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:45.432543    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:45.432549    5521 round_trippers.go:580]     Audit-Id: b3c72e32-7485-434a-9741-e61d4dbf854b
	I0805 16:37:45.432551    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:45.432554    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:45.432557    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:45.432560    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:45.432563    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:45.432975    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:45.433185    5521 pod_ready.go:97] node "multinode-985000" hosting pod "kube-proxy-fwgw7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:45.433197    5521 pod_ready.go:81] duration metric: took 400.252263ms for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	E0805 16:37:45.433203    5521 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-985000" hosting pod "kube-proxy-fwgw7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:45.433211    5521 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s65dd" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:45.632072    5521 request.go:629] Waited for 198.802376ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s65dd
	I0805 16:37:45.632244    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s65dd
	I0805 16:37:45.632255    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:45.632266    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:45.632272    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:45.635053    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:45.635075    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:45.635085    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:45.635094    5521 round_trippers.go:580]     Audit-Id: 57426407-9d2e-4f47-a704-559027932b6b
	I0805 16:37:45.635098    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:45.635145    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:45.635163    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:45.635171    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:45.635354    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s65dd","generateName":"kube-proxy-","namespace":"kube-system","uid":"25cd7fe5-8af2-4869-be11-1eb8c5a7ec01","resourceVersion":"1280","creationTimestamp":"2024-08-05T23:34:49Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b6258e6-7b31-4600-b32b-4a269867c123","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:34:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b6258e6-7b31-4600-b32b-4a269867c123\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0805 16:37:45.831233    5521 request.go:629] Waited for 195.519063ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000-m03
	I0805 16:37:45.831411    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000-m03
	I0805 16:37:45.831422    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:45.831433    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:45.831439    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:45.834136    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:45.834155    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:45.834163    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:45.834183    5521 round_trippers.go:580]     Audit-Id: 27e71a24-1a24-4f27-b263-1184e4e136ef
	I0805 16:37:45.834194    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:45.834220    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:45.834227    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:45.834231    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:45.834346    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000-m03","uid":"9699bc94-d62c-4219-9310-93c890f4d182","resourceVersion":"1310","creationTimestamp":"2024-08-05T23:35:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_05T16_35_55_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:35:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0805 16:37:45.834594    5521 pod_ready.go:92] pod "kube-proxy-s65dd" in "kube-system" namespace has status "Ready":"True"
	I0805 16:37:45.834607    5521 pod_ready.go:81] duration metric: took 401.389356ms for pod "kube-proxy-s65dd" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:45.834615    5521 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:46.030012    5521 request.go:629] Waited for 195.347838ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:37:46.030118    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:37:46.030282    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:46.030295    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:46.030302    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:46.033255    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:46.033269    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:46.033277    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:46.033282    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:46 GMT
	I0805 16:37:46.033295    5521 round_trippers.go:580]     Audit-Id: 5581d0b0-634a-4879-93db-f12183f9c6d1
	I0805 16:37:46.033299    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:46.033303    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:46.033307    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:46.033383    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-985000","namespace":"kube-system","uid":"5e23b1b7-e45d-4b43-831c-aa835c5e536d","resourceVersion":"1379","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.mirror":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.seen":"2024-08-05T23:21:06.366029633Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5439 chars]
	I0805 16:37:46.231588    5521 request.go:629] Waited for 197.896286ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:46.231711    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:46.231722    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:46.231734    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:46.231741    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:46.234296    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:46.234309    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:46.234327    5521 round_trippers.go:580]     Audit-Id: ed41d168-df4f-4577-a59b-11a4695f1e4d
	I0805 16:37:46.234334    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:46.234344    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:46.234348    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:46.234352    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:46.234357    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:46 GMT
	I0805 16:37:46.234726    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:46.235010    5521 pod_ready.go:97] node "multinode-985000" hosting pod "kube-scheduler-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:46.235036    5521 pod_ready.go:81] duration metric: took 400.401386ms for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	E0805 16:37:46.235046    5521 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-985000" hosting pod "kube-scheduler-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:46.235053    5521 pod_ready.go:38] duration metric: took 1.316290856s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:37:46.235072    5521 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 16:37:46.244782    5521 command_runner.go:130] > -16
	I0805 16:37:46.244799    5521 ops.go:34] apiserver oom_adj: -16
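(The oom_adj probe above (`cat /proc/$(pgrep kube-apiserver)/oom_adj`) verifies that the restarted apiserver kept its OOM-killer bias: a negative value such as -16 makes the kernel's OOM killer strongly prefer other victims. A sketch of the same probe in Go, run on the node itself rather than over SSH — hypothetical code, not minikube's ops.go:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	// Locate the kube-apiserver PID, as the pgrep in the logged command does.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		log.Fatal(err) // pgrep exits non-zero when nothing matches
	}
	pid := strings.Fields(string(out))[0]

	// oom_adj is the legacy interface (superseded by oom_score_adj, but
	// still what this test reads); parse it as a signed integer.
	raw, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		log.Fatal(err)
	}
	adj, err := strconv.Atoi(strings.TrimSpace(string(raw)))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("apiserver oom_adj: %d\n", adj) // -16 in this run
}

)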
	I0805 16:37:46.244803    5521 kubeadm.go:597] duration metric: took 8.509016692s to restartPrimaryControlPlane
	I0805 16:37:46.244808    5521 kubeadm.go:394] duration metric: took 8.531546295s to StartCluster
	I0805 16:37:46.244817    5521 settings.go:142] acquiring lock: {Name:mk564a817a54ecf2aef16a4d2309e85208c0231f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:37:46.244907    5521 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:37:46.245297    5521 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/kubeconfig: {Name:mk2a0d8b4d330b3c26432fc65d015ddf98a9cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:37:46.245581    5521 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:37:46.245620    5521 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 16:37:46.245737    5521 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:37:46.265883    5521 out.go:177] * Verifying Kubernetes components...
	I0805 16:37:46.287681    5521 out.go:177] * Enabled addons: 
	I0805 16:37:46.308720    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:37:46.329784    5521 addons.go:510] duration metric: took 84.170663ms for enable addons: enabled=[]
	I0805 16:37:46.445431    5521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:37:46.455908    5521 node_ready.go:35] waiting up to 6m0s for node "multinode-985000" to be "Ready" ...
	I0805 16:37:46.455963    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:46.455968    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:46.455974    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:46.455977    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:46.457387    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:46.457397    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:46.457405    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:46.457409    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:46 GMT
	I0805 16:37:46.457413    5521 round_trippers.go:580]     Audit-Id: bd4eda68-4863-49e7-bbfb-7ea21cb5ada5
	I0805 16:37:46.457415    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:46.457419    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:46.457421    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:46.457522    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:46.956358    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:46.956384    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:46.956396    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:46.956402    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:46.958818    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:46.958832    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:46.958842    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:46.958847    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:47 GMT
	I0805 16:37:46.958852    5521 round_trippers.go:580]     Audit-Id: b4463266-7add-4cc7-bedc-006651384d80
	I0805 16:37:46.958856    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:46.958860    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:46.958865    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:46.959158    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:47.456173    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:47.456189    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:47.456196    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:47.456199    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:47.457836    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:47.457847    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:47.457853    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:47 GMT
	I0805 16:37:47.457855    5521 round_trippers.go:580]     Audit-Id: b5690d8d-ba4d-4e8f-b3e4-326d910d1169
	I0805 16:37:47.457859    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:47.457863    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:47.457865    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:47.457868    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:47.458059    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:47.957596    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:47.957622    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:47.957635    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:47.957747    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:47.960401    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:47.960416    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:47.960423    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:48 GMT
	I0805 16:37:47.960427    5521 round_trippers.go:580]     Audit-Id: 02db3cf8-0261-4eb0-999f-e3bddfad9106
	I0805 16:37:47.960432    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:47.960436    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:47.960442    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:47.960446    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:47.960593    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:48.456064    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:48.456080    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:48.456087    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:48.456090    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:48.457742    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:48.457753    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:48.457758    5521 round_trippers.go:580]     Audit-Id: 70dbc308-f0bd-455d-8c1c-5afbe89a93d9
	I0805 16:37:48.457762    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:48.457764    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:48.457768    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:48.457772    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:48.457775    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:48 GMT
	I0805 16:37:48.457993    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:48.458188    5521 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
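(The node GETs above repeat on a fixed ~500ms cadence (16:37:46.456, 46.956, 47.456, 47.957, 48.456, ...) under the 6m0s budget announced at node_ready.go:35. A minimal sketch of such a loop using k8s.io/apimachinery's wait helpers — an assumption-laden reconstruction, not minikube's node_ready.go, and unlike the real loop it aborts on the first request error rather than tolerating transient ones:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll every 500ms for up to 6 minutes, matching the cadence and the
	// "waiting up to 6m0s" budget visible in this log.
	err = wait.PollUntilContextTimeout(context.Background(),
		500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			n, err := client.CoreV1().Nodes().Get(ctx, "multinode-985000", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(`node "multinode-985000" has status "Ready":"True"`)
}

Each iteration of that loop corresponds to one GET/response block like the ones surrounding this note.)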
	I0805 16:37:48.956783    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:48.956808    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:48.956843    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:48.956864    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:48.959167    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:48.959183    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:48.959193    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:48.959202    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:48.959208    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:48.959213    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:49 GMT
	I0805 16:37:48.959218    5521 round_trippers.go:580]     Audit-Id: 8fc7039f-2874-4170-a425-4689f2a4108b
	I0805 16:37:48.959223    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:48.959444    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:49.456474    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:49.456499    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:49.456511    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:49.456519    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:49.460713    5521 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 16:37:49.460739    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:49.460750    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:49.460761    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:49.460768    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:49 GMT
	I0805 16:37:49.460771    5521 round_trippers.go:580]     Audit-Id: ca04ca0c-3f72-4aff-8e7b-301f719bcbfc
	I0805 16:37:49.460775    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:49.460779    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:49.460857    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:49.957699    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:49.957728    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:49.957740    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:49.957835    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:49.960680    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:49.960698    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:49.960708    5521 round_trippers.go:580]     Audit-Id: 2de612c8-6d27-4ce3-b54a-c8ff3a4a639d
	I0805 16:37:49.960714    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:49.960722    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:49.960727    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:49.960734    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:49.960740    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:50 GMT
	I0805 16:37:49.960897    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:50.457100    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:50.457129    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:50.457142    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:50.457153    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:50.459627    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:50.459642    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:50.459649    5521 round_trippers.go:580]     Audit-Id: fafeb1d7-a055-47c0-988a-6b38c5651dfc
	I0805 16:37:50.459655    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:50.459660    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:50.459663    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:50.459666    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:50.459676    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:50 GMT
	I0805 16:37:50.459741    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:50.459999    5521 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:37:50.956078    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:50.956154    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:50.956163    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:50.956169    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:50.958070    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:50.958082    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:50.958087    5521 round_trippers.go:580]     Audit-Id: 87aa82fe-18d5-4cce-85d4-59e61ce26f17
	I0805 16:37:50.958091    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:50.958094    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:50.958097    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:50.958100    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:50.958102    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:51 GMT
	I0805 16:37:50.958160    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:51.457531    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:51.457557    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:51.457653    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:51.457663    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:51.460369    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:51.460384    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:51.460391    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:51.460396    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:51.460400    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:51.460404    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:51 GMT
	I0805 16:37:51.460431    5521 round_trippers.go:580]     Audit-Id: 9466c051-32fc-4ea5-bd73-ed0e7f687b57
	I0805 16:37:51.460450    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:51.460881    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:51.958224    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:51.958246    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:51.958258    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:51.958263    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:51.960788    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:51.960803    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:51.960811    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:51.960816    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:52 GMT
	I0805 16:37:51.960821    5521 round_trippers.go:580]     Audit-Id: af328a60-8cdc-4dd9-8f48-0c8f8247a6e1
	I0805 16:37:51.960827    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:51.960833    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:51.960836    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:51.960936    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:52.457362    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:52.457389    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:52.457401    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:52.457409    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:52.460067    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:52.460081    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:52.460088    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:52.460093    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:52.460097    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:52.460101    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:52.460104    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:52 GMT
	I0805 16:37:52.460107    5521 round_trippers.go:580]     Audit-Id: 7e825e88-a0c3-4ec8-9784-79cc2ced397e
	I0805 16:37:52.460238    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:52.460481    5521 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:37:52.956862    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:52.956888    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:52.956900    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:52.956906    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:52.959190    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:52.959207    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:52.959222    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:52.959230    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:52.959236    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:52.959241    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:53 GMT
	I0805 16:37:52.959245    5521 round_trippers.go:580]     Audit-Id: 1a9c796b-7598-4e9f-984e-7d71ef0ecc6b
	I0805 16:37:52.959248    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:52.959484    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:53.456240    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:53.456260    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:53.456268    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:53.456272    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:53.458257    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:53.458266    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:53.458272    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:53.458274    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:53 GMT
	I0805 16:37:53.458279    5521 round_trippers.go:580]     Audit-Id: 624a2604-a974-4849-aae7-2e1a5658d567
	I0805 16:37:53.458282    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:53.458287    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:53.458289    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:53.458511    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:53.957417    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:53.957442    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:53.957454    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:53.957460    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:53.960056    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:53.960069    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:53.960076    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:53.960080    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:53.960084    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:53.960088    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:53.960092    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:54 GMT
	I0805 16:37:53.960096    5521 round_trippers.go:580]     Audit-Id: 4faec3b3-a538-4ac5-b5df-a77a30b26579
	I0805 16:37:53.960283    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:54.456804    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:54.456830    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:54.456842    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:54.456850    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:54.459440    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:54.459455    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:54.459462    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:54.459467    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:54 GMT
	I0805 16:37:54.459471    5521 round_trippers.go:580]     Audit-Id: c4315559-7c37-420d-be82-f17839e46d45
	I0805 16:37:54.459475    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:54.459478    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:54.459483    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:54.459541    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:54.957878    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:54.957940    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:54.957948    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:54.957954    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:54.959305    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:54.959315    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:54.959320    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:54.959323    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:55 GMT
	I0805 16:37:54.959326    5521 round_trippers.go:580]     Audit-Id: b65ad43a-738a-45c5-8d88-879d1015f894
	I0805 16:37:54.959328    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:54.959331    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:54.959334    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:54.959389    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1479","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5422 chars]
	I0805 16:37:54.959586    5521 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:37:55.456090    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:55.456116    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:55.456128    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:55.456169    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:55.458752    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:55.458766    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:55.458773    5521 round_trippers.go:580]     Audit-Id: 616d546e-47b3-4c39-a1cf-a7bc7ca58bf7
	I0805 16:37:55.458777    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:55.458782    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:55.458785    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:55.458790    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:55.458793    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:55 GMT
	I0805 16:37:55.459013    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1493","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0805 16:37:55.956768    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:55.956795    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:55.956807    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:55.956815    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:55.959573    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:55.959589    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:55.959598    5521 round_trippers.go:580]     Audit-Id: a21b3b8d-1df5-4728-80b8-f92ed173fb09
	I0805 16:37:55.959602    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:55.959606    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:55.959611    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:55.959615    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:55.959619    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:56 GMT
	I0805 16:37:55.959715    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1493","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0805 16:37:56.456636    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:56.456739    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:56.456753    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:56.456759    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:56.458839    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:56.458851    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:56.458859    5521 round_trippers.go:580]     Audit-Id: b8671d44-80ca-458b-b1a7-50f5ad978f8f
	I0805 16:37:56.458864    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:56.458870    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:56.458874    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:56.458878    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:56.458881    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:56 GMT
	I0805 16:37:56.458982    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1493","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0805 16:37:56.956321    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:56.956347    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:56.956363    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:56.956372    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:56.958919    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:56.958932    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:56.958939    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:56.958944    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:56.958948    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:57 GMT
	I0805 16:37:56.958952    5521 round_trippers.go:580]     Audit-Id: 4f4bb43a-a081-437b-8ed2-cbdb66346756
	I0805 16:37:56.958958    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:56.958961    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:56.959161    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1493","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0805 16:37:57.456800    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:57.456815    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:57.456821    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:57.456825    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:57.458252    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:57.458262    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:57.458266    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:57.458270    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:57 GMT
	I0805 16:37:57.458273    5521 round_trippers.go:580]     Audit-Id: f407e253-302d-4f95-b5a4-ba92b556127a
	I0805 16:37:57.458276    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:57.458278    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:57.458281    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:57.458508    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:57.458703    5521 node_ready.go:49] node "multinode-985000" has status "Ready":"True"
	I0805 16:37:57.458716    5521 node_ready.go:38] duration metric: took 11.002775889s for node "multinode-985000" to be "Ready" ...
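The node_ready lines above record a simple poll loop: the test GETs the node object roughly every 500ms until its Ready condition reports "True", then logs the elapsed duration (11.002775889s here). A minimal sketch of such a loop, assuming client-go; waitNodeReady is a hypothetical helper, and the kubeconfig path and timeout in main are illustrative, not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named node twice per second until its Ready
// condition is True, or fails once the timeout elapses.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	start := time.Now()
	deadline := start.Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Printf("node %q ready after %s\n", name, time.Since(start))
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q not ready within %s", name, timeout)
}

func main() {
	// Illustrative kubeconfig loading; the test uses its own cluster config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println("build clientset:", err)
		return
	}
	if err := waitNodeReady(context.Background(), cs, "multinode-985000", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}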
	I0805 16:37:57.458723    5521 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:37:57.458755    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:37:57.458761    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:57.458766    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:57.458770    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:57.462079    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:37:57.462091    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:57.462096    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:57.462099    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:57.462102    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:57.462105    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:57.462107    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:57 GMT
	I0805 16:37:57.462111    5521 round_trippers.go:580]     Audit-Id: c20c94e3-f664-43bb-99a2-b2fb3d7f9976
	I0805 16:37:57.463098    5521 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1502"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 72982 chars]
	I0805 16:37:57.464719    5521 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:57.464766    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:57.464771    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:57.464777    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:57.464781    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:57.468609    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:37:57.468622    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:57.468660    5521 round_trippers.go:580]     Audit-Id: 9de6faa5-7a31-44a9-83bf-9ebccfd4a34c
	I0805 16:37:57.468668    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:57.468673    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:57.468677    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:57.468680    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:57.468683    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:57 GMT
	I0805 16:37:57.468940    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0805 16:37:57.469229    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:57.469236    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:57.469242    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:57.469246    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:57.472498    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:37:57.472509    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:57.472515    5521 round_trippers.go:580]     Audit-Id: 4ff61667-289e-4440-93e2-be7d6d55b721
	I0805 16:37:57.472519    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:57.472522    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:57.472525    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:57.472529    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:57.472531    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:57 GMT
	I0805 16:37:57.472719    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:57.966220    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:57.966278    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:57.966296    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:57.966304    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:57.969173    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:57.969187    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:57.969194    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:57.969198    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:57.969202    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:57.969206    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:57.969210    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:58 GMT
	I0805 16:37:57.969214    5521 round_trippers.go:580]     Audit-Id: 9d8c78fc-82fd-4791-b979-ae013d775a53
	I0805 16:37:57.969286    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0805 16:37:57.969645    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:57.969655    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:57.969662    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:57.969665    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:57.971024    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:57.971035    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:57.971043    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:57.971057    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:57.971067    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:57.971072    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:58 GMT
	I0805 16:37:57.971078    5521 round_trippers.go:580]     Audit-Id: 1384bca3-9b68-4402-b310-399209a4314b
	I0805 16:37:57.971085    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:57.971227    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:58.465939    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:58.465967    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:58.465978    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:58.465984    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:58.468758    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:58.468774    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:58.468781    5521 round_trippers.go:580]     Audit-Id: 72df3ada-da8b-4478-8394-8e4440f54d0d
	I0805 16:37:58.468786    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:58.468790    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:58.468794    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:58.468797    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:58.468800    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:58 GMT
	I0805 16:37:58.469261    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0805 16:37:58.469660    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:58.469669    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:58.469678    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:58.469683    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:58.471092    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:58.471100    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:58.471106    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:58.471110    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:58.471113    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:58 GMT
	I0805 16:37:58.471116    5521 round_trippers.go:580]     Audit-Id: 422803bf-9df2-457f-baab-402da408f3ef
	I0805 16:37:58.471118    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:58.471121    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:58.471275    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:58.966614    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:58.966630    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:58.966638    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:58.966643    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:58.968744    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:58.968756    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:58.968764    5521 round_trippers.go:580]     Audit-Id: 3e47d6ce-e3a9-4db9-9176-cf25942d89b9
	I0805 16:37:58.968769    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:58.968773    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:58.968777    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:58.968779    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:58.968782    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:59 GMT
	I0805 16:37:58.969124    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0805 16:37:58.969515    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:58.969537    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:58.969561    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:58.969565    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:58.970905    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:58.970913    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:58.970918    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:58.970927    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:58.970932    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:58.970935    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:58.970938    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:59 GMT
	I0805 16:37:58.970940    5521 round_trippers.go:580]     Audit-Id: f5155c70-9046-4427-944c-248d4543ab46
	I0805 16:37:58.971032    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.465508    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:59.465521    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.465527    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.465530    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.468891    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:37:59.468903    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.468908    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:59 GMT
	I0805 16:37:59.468912    5521 round_trippers.go:580]     Audit-Id: 04ed6578-9810-4fac-bbc6-2e95106ea7a2
	I0805 16:37:59.468914    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.468917    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.468920    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.468922    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.469308    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0805 16:37:59.469589    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:59.469595    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.469601    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.469604    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.471279    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:59.471287    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.471293    5521 round_trippers.go:580]     Audit-Id: 9ef82004-a4d2-4da7-8c13-f62c040183d9
	I0805 16:37:59.471296    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.471299    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.471301    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.471303    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.471306    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:59 GMT
	I0805 16:37:59.471417    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.471592    5521 pod_ready.go:102] pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace has status "Ready":"False"
	I0805 16:37:59.965187    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:59.965206    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.965218    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.965223    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.967501    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:59.967516    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.967523    5521 round_trippers.go:580]     Audit-Id: 6aa85007-6ee0-4657-8e54-a4bb9dfb34ac
	I0805 16:37:59.967528    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.967548    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.967555    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.967559    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.967563    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.967804    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1520","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6784 chars]
	I0805 16:37:59.968187    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:59.968194    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.968200    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.968203    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.969359    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:59.969366    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.969373    5521 round_trippers.go:580]     Audit-Id: 47ab49d3-f2d9-42b4-9106-89187d49ce44
	I0805 16:37:59.969376    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.969378    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.969382    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.969385    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.969389    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.969574    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.969740    5521 pod_ready.go:92] pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace has status "Ready":"True"
	I0805 16:37:59.969749    5521 pod_ready.go:81] duration metric: took 2.505012595s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
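A note on the loop that ends above: pod_ready polls the pod object on a roughly 500ms cadence and reports Ready=False until the pod's PodReady condition flips to True, bounded by the 6m0s deadline. A minimal client-go sketch of that pattern, assuming a placeholder kubeconfig path (an illustration, not minikube's actual pod_ready.go):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		deadline := time.Now().Add(6 * time.Minute) // same bound as "waiting up to 6m0s" above
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").
				Get(context.TODO(), "coredns-7db6d8ff4d-fqtll", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms polling cadence in the log
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}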
	I0805 16:37:59.969756    5521 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.969784    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-985000
	I0805 16:37:59.969788    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.969793    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.969797    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.970714    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:37:59.970723    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.970728    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.970731    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.970733    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.970736    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.970738    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.970740    5521 round_trippers.go:580]     Audit-Id: e43ae6e7-5ed0-48b6-a0a7-dfb77e057ed0
	I0805 16:37:59.970919    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-985000","namespace":"kube-system","uid":"8d7ca2d9-8c7b-41b9-a199-de6449107471","resourceVersion":"1506","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.mirror":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030299Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6358 chars]
	I0805 16:37:59.971134    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:59.971141    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.971147    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.971150    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.972128    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:37:59.972141    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.972148    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.972154    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.972158    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.972160    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.972163    5521 round_trippers.go:580]     Audit-Id: 5b17c3dc-a0a2-4c0d-aa7a-8999b87e3e64
	I0805 16:37:59.972187    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.972281    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.972443    5521 pod_ready.go:92] pod "etcd-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:37:59.972450    5521 pod_ready.go:81] duration metric: took 2.690084ms for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.972459    5521 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.972487    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-985000
	I0805 16:37:59.972492    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.972497    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.972500    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.973486    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:37:59.973494    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.973499    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.973504    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.973508    5521 round_trippers.go:580]     Audit-Id: 5bcb7226-eda8-4823-8b5c-25d9a2496fe7
	I0805 16:37:59.973514    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.973518    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.973522    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.973687    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-985000","namespace":"kube-system","uid":"9be3378a-5fab-4907-baad-507918e714e4","resourceVersion":"1498","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.mirror":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7892 chars]
	I0805 16:37:59.973925    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:59.973931    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.973937    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.973941    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.974960    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:59.974978    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.974986    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.974990    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.974993    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.974996    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.975000    5521 round_trippers.go:580]     Audit-Id: 9e7c3601-1b94-462b-97ec-1a8afab1df7f
	I0805 16:37:59.975003    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.975129    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.975296    5521 pod_ready.go:92] pod "kube-apiserver-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:37:59.975303    5521 pod_ready.go:81] duration metric: took 2.839851ms for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.975309    5521 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.975339    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-985000
	I0805 16:37:59.975343    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.975349    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.975352    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.976422    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:59.976452    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.976458    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.976467    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.976470    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.976472    5521 round_trippers.go:580]     Audit-Id: 512682ae-f4a9-4641-903b-89cfe7630d58
	I0805 16:37:59.976476    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.976478    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.976584    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-985000","namespace":"kube-system","uid":"4ad64361-65de-4b0b-b2a3-07df18c2e603","resourceVersion":"1494","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.mirror":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.seen":"2024-08-05T23:21:06.366027130Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0805 16:37:59.976808    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:59.976815    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.976820    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.976824    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.977900    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:59.977908    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.977912    5521 round_trippers.go:580]     Audit-Id: 09ba5c21-e357-4918-93b4-ff1a00ece334
	I0805 16:37:59.977916    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.977919    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.977922    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.977925    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.977928    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.978095    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.978252    5521 pod_ready.go:92] pod "kube-controller-manager-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:37:59.978260    5521 pod_ready.go:81] duration metric: took 2.945375ms for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.978267    5521 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.978292    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwgw7
	I0805 16:37:59.978297    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.978313    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.978320    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.979354    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:59.979360    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.979364    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.979367    5521 round_trippers.go:580]     Audit-Id: d6e77621-e9d2-486b-8cc4-49ab45a5f053
	I0805 16:37:59.979373    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.979378    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.979382    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.979386    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.979584    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fwgw7","generateName":"kube-proxy-","namespace":"kube-system","uid":"3fb72e39-699d-4123-ae5e-e314a191d904","resourceVersion":"1509","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b6258e6-7b31-4600-b32b-4a269867c123","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b6258e6-7b31-4600-b32b-4a269867c123\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0805 16:37:59.979798    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:59.979805    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.979810    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.979815    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.980814    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:37:59.980822    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.980829    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.980835    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.980839    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.980842    5521 round_trippers.go:580]     Audit-Id: bf9dc5db-49ef-4e93-a9ad-d8ea6d952b22
	I0805 16:37:59.980845    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.980847    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.980963    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.981119    5521 pod_ready.go:92] pod "kube-proxy-fwgw7" in "kube-system" namespace has status "Ready":"True"
	I0805 16:37:59.981126    5521 pod_ready.go:81] duration metric: took 2.853579ms for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.981131    5521 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s65dd" in "kube-system" namespace to be "Ready" ...
	I0805 16:38:00.165697    5521 request.go:629] Waited for 184.4763ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s65dd
	I0805 16:38:00.165754    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s65dd
	I0805 16:38:00.165763    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:00.165776    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:00.165784    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:00.168520    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:38:00.168535    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:00.168543    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:00.168547    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:00.168552    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:00.168556    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:00.168559    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:38:00.168564    5521 round_trippers.go:580]     Audit-Id: cb996198-c69f-41f3-9883-c0b1d86c0ef8
	I0805 16:38:00.168681    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s65dd","generateName":"kube-proxy-","namespace":"kube-system","uid":"25cd7fe5-8af2-4869-be11-1eb8c5a7ec01","resourceVersion":"1280","creationTimestamp":"2024-08-05T23:34:49Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b6258e6-7b31-4600-b32b-4a269867c123","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:34:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b6258e6-7b31-4600-b32b-4a269867c123\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0805 16:38:00.366684    5521 request.go:629] Waited for 197.656042ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000-m03
	I0805 16:38:00.366816    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000-m03
	I0805 16:38:00.366827    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:00.366839    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:00.366845    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:00.369434    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:38:00.369449    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:00.369456    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:38:00.369461    5521 round_trippers.go:580]     Audit-Id: 8a485a3a-116c-4fd2-986e-0f95c466f2b6
	I0805 16:38:00.369464    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:00.369468    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:00.369472    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:00.369491    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:00.369671    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000-m03","uid":"9699bc94-d62c-4219-9310-93c890f4d182","resourceVersion":"1310","creationTimestamp":"2024-08-05T23:35:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_05T16_35_55_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:35:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0805 16:38:00.369888    5521 pod_ready.go:92] pod "kube-proxy-s65dd" in "kube-system" namespace has status "Ready":"True"
	I0805 16:38:00.369900    5521 pod_ready.go:81] duration metric: took 388.763276ms for pod "kube-proxy-s65dd" in "kube-system" namespace to be "Ready" ...
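The "Waited for … due to client-side throttling, not priority and fairness" lines in this stretch come from client-go's token-bucket limiter on the client, not from the API server's Priority and Fairness. A sketch of where that budget is configured; the QPS/Burst values are illustrative, not minikube's, and the throttle messages only surface at sufficient klog verbosity:

	package main

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cfg.QPS = 5    // steady-state requests per second (illustrative value)
		cfg.Burst = 10 // extra requests allowed in a burst (illustrative value)
		client := kubernetes.NewForConfigOrDie(cfg)

		// Once the burst budget is spent, each call blocks until the token
		// bucket refills, which is what the "Waited for …" lines record.
		for i := 0; i < 20; i++ {
			client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		}
	}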
	I0805 16:38:00.369909    5521 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:38:00.565911    5521 request.go:629] Waited for 195.966473ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:38:00.566005    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:38:00.566010    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:00.566016    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:00.566021    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:00.567727    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:38:00.567736    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:00.567741    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:00.567744    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:00.567746    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:00.567750    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:00.567753    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:38:00.567756    5521 round_trippers.go:580]     Audit-Id: e82326e5-6b6c-4bbe-9e4b-0ddab6f947e6
	I0805 16:38:00.567921    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-985000","namespace":"kube-system","uid":"5e23b1b7-e45d-4b43-831c-aa835c5e536d","resourceVersion":"1502","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.mirror":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.seen":"2024-08-05T23:21:06.366029633Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0805 16:38:00.765952    5521 request.go:629] Waited for 197.798951ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:38:00.766012    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:38:00.766024    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:00.766035    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:00.766043    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:00.768641    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:38:00.768656    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:00.768663    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:00.768668    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:00.768672    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:00.768679    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:00.768686    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:38:00.768690    5521 round_trippers.go:580]     Audit-Id: 185ed8df-c8cf-4ff7-8566-ce38bafe88b6
	I0805 16:38:00.768965    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1525","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0805 16:38:00.769214    5521 pod_ready.go:92] pod "kube-scheduler-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:38:00.769227    5521 pod_ready.go:81] duration metric: took 399.310045ms for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:38:00.769236    5521 pod_ready.go:38] duration metric: took 3.310501987s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:38:00.769251    5521 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:38:00.769314    5521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:38:00.780856    5521 command_runner.go:130] > 1713
	I0805 16:38:00.780992    5521 api_server.go:72] duration metric: took 14.535377095s to wait for apiserver process to appear ...
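The process check above runs sudo pgrep -xnf kube-apiserver.*minikube.* through minikube's SSH runner and treats a printed PID (1713 here) as success. A local sketch of the same check, assuming a host where the command can be run directly rather than over SSH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// -x exact match, -n newest process, -f match against the full command line.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Println("apiserver process not found:", err) // pgrep exits non-zero on no match
			return
		}
		fmt.Println("apiserver PID:", strings.TrimSpace(string(out)))
	}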
	I0805 16:38:00.781000    5521 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:38:00.781009    5521 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:38:00.784000    5521 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0805 16:38:00.784029    5521 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0805 16:38:00.784034    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:00.784041    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:00.784045    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:00.784553    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:38:00.784561    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:00.784567    5521 round_trippers.go:580]     Content-Length: 263
	I0805 16:38:00.784570    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:38:00.784572    5521 round_trippers.go:580]     Audit-Id: 5f0639a4-edd4-4f06-9ffe-bc3569a1e001
	I0805 16:38:00.784575    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:00.784578    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:00.784582    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:00.784584    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:00.784592    5521 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0805 16:38:00.784614    5521 api_server.go:141] control plane version: v1.30.3
	I0805 16:38:00.784621    5521 api_server.go:131] duration metric: took 3.617958ms to wait for apiserver health ...
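The two probes above hit endpoints that the default system:public-info-viewer binding exposes: /healthz, which should return 200 with body "ok", and /version, whose JSON reports the control-plane build (v1.30.3 here). A sketch of both; TLS verification is skipped only to keep the sketch short, whereas minikube authenticates with the cluster's client certificates:

	package main

	import (
		"crypto/tls"
		"encoding/json"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		c := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		}}

		resp, err := c.Get("https://192.169.0.13:8443/healthz")
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok

		resp, err = c.Get("https://192.169.0.13:8443/version")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var v struct {
			GitVersion string `json:"gitVersion"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
			panic(err)
		}
		fmt.Println("control plane version:", v.GitVersion) // v1.30.3 in the log
	}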
	I0805 16:38:00.784627    5521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 16:38:00.965403    5521 request.go:629] Waited for 180.737038ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:38:00.965497    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:38:00.965511    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:00.965523    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:00.965530    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:00.969409    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:38:00.969427    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:00.969435    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:00.969440    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:00.969467    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:00.969482    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:01 GMT
	I0805 16:38:00.969489    5521 round_trippers.go:580]     Audit-Id: 9df3ad2c-a16e-4582-8dab-0552f9f48e75
	I0805 16:38:00.969493    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:00.970371    5521 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1531"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1520","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 72029 chars]
	I0805 16:38:00.971896    5521 system_pods.go:59] 10 kube-system pods found
	I0805 16:38:00.971906    5521 system_pods.go:61] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:38:00.971910    5521 system_pods.go:61] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:38:00.971912    5521 system_pods.go:61] "kindnet-5kfjr" [d68d8211-58f0-4a8f-904a-c6f9f530d58d] Running
	I0805 16:38:00.971915    5521 system_pods.go:61] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:38:00.971917    5521 system_pods.go:61] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:38:00.971920    5521 system_pods.go:61] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:38:00.971923    5521 system_pods.go:61] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:38:00.971926    5521 system_pods.go:61] "kube-proxy-s65dd" [25cd7fe5-8af2-4869-be11-1eb8c5a7ec01] Running
	I0805 16:38:00.971929    5521 system_pods.go:61] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:38:00.971931    5521 system_pods.go:61] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:38:00.971935    5521 system_pods.go:74] duration metric: took 187.304764ms to wait for pod list to return data ...
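The "10 kube-system pods found" block above is a single List of the kube-system namespace followed by a per-pod status check. A client-go sketch of that step, with the same placeholder kubeconfig path as before:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			// Mirrors the name/UID/phase triples printed in the log above.
			fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
		}
	}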
	I0805 16:38:00.971941    5521 default_sa.go:34] waiting for default service account to be created ...
	I0805 16:38:01.166632    5521 request.go:629] Waited for 194.612281ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:38:01.166685    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:38:01.166696    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:01.166710    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:01.166717    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:01.169824    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:38:01.169846    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:01.169857    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:01.169864    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:01.169869    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:01.169872    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:01.169875    5521 round_trippers.go:580]     Content-Length: 262
	I0805 16:38:01.169881    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:01 GMT
	I0805 16:38:01.169885    5521 round_trippers.go:580]     Audit-Id: 596b84b0-d5e1-453f-9c6b-48a083c0f9d5
	I0805 16:38:01.169899    5521 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1531"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b0626468-f73b-4e9b-8270-658495d43f4a","resourceVersion":"337","creationTimestamp":"2024-08-05T23:21:19Z"}}]}
	I0805 16:38:01.170038    5521 default_sa.go:45] found service account: "default"
	I0805 16:38:01.170050    5521 default_sa.go:55] duration metric: took 198.104201ms for default service account to be created ...
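The default-service-account wait above is a List of the default namespace's ServiceAccounts, scanned for one named "default". A sketch of that scan, under the same placeholder-kubeconfig assumption (the caller retries until the account appears):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		sas, err := client.CoreV1().ServiceAccounts("default").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, sa := range sas.Items {
			if sa.Name == "default" {
				fmt.Println(`found service account: "default"`)
				return
			}
		}
		fmt.Println("default service account not created yet") // caller would retry
	}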
	I0805 16:38:01.170061    5521 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 16:38:01.365509    5521 request.go:629] Waited for 195.385608ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:38:01.365661    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:38:01.365673    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:01.365684    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:01.365691    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:01.369380    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:38:01.369395    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:01.369401    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:01.369406    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:01 GMT
	I0805 16:38:01.369410    5521 round_trippers.go:580]     Audit-Id: 61bbab58-2729-4303-914c-2ce9a281d990
	I0805 16:38:01.369414    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:01.369419    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:01.369423    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:01.370558    5521 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1531"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1520","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 72029 chars]
	I0805 16:38:01.372078    5521 system_pods.go:86] 10 kube-system pods found
	I0805 16:38:01.372087    5521 system_pods.go:89] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:38:01.372091    5521 system_pods.go:89] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:38:01.372095    5521 system_pods.go:89] "kindnet-5kfjr" [d68d8211-58f0-4a8f-904a-c6f9f530d58d] Running
	I0805 16:38:01.372098    5521 system_pods.go:89] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:38:01.372101    5521 system_pods.go:89] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:38:01.372104    5521 system_pods.go:89] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:38:01.372108    5521 system_pods.go:89] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:38:01.372111    5521 system_pods.go:89] "kube-proxy-s65dd" [25cd7fe5-8af2-4869-be11-1eb8c5a7ec01] Running
	I0805 16:38:01.372114    5521 system_pods.go:89] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:38:01.372117    5521 system_pods.go:89] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:38:01.372121    5521 system_pods.go:126] duration metric: took 202.055662ms to wait for k8s-apps to be running ...
	I0805 16:38:01.372129    5521 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 16:38:01.372178    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:38:01.384196    5521 system_svc.go:56] duration metric: took 12.064518ms WaitForService to wait for kubelet
	I0805 16:38:01.384212    5521 kubeadm.go:582] duration metric: took 15.138595056s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:38:01.384224    5521 node_conditions.go:102] verifying NodePressure condition ...
	I0805 16:38:01.566320    5521 request.go:629] Waited for 182.003764ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0805 16:38:01.566366    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0805 16:38:01.566373    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:01.566385    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:01.566391    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:01.569209    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:38:01.569222    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:01.569229    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:01.569238    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:01 GMT
	I0805 16:38:01.569244    5521 round_trippers.go:580]     Audit-Id: c16ec0aa-cf96-486e-a79d-d457d64a2789
	I0805 16:38:01.569248    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:01.569250    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:01.569254    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:01.569365    5521 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1531"},"items":[{"metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1525","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10031 chars]
	I0805 16:38:01.569754    5521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:38:01.569766    5521 node_conditions.go:123] node cpu capacity is 2
	I0805 16:38:01.569774    5521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:38:01.569781    5521 node_conditions.go:123] node cpu capacity is 2
	I0805 16:38:01.569787    5521 node_conditions.go:105] duration metric: took 185.55857ms to run NodePressure ...
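
The NodePressure check above reads each node's capacity out of the NodeList response. A minimal client-go sketch of the same read, reusing a *kubernetes.Clientset built as in the earlier sketch (add corev1 "k8s.io/api/core/v1" to its imports):

// Sketch: print the capacity fields that node_conditions.go logs above.
nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
if err != nil {
	panic(err)
}
for _, n := range nodes.Items {
	cpu := n.Status.Capacity[corev1.ResourceCPU]              // e.g. "2"
	eph := n.Status.Capacity[corev1.ResourceEphemeralStorage] // e.g. "17734596Ki"
	fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
}
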
	I0805 16:38:01.569796    5521 start.go:241] waiting for startup goroutines ...
	I0805 16:38:01.569804    5521 start.go:246] waiting for cluster config update ...
	I0805 16:38:01.569812    5521 start.go:255] writing updated cluster config ...
	I0805 16:38:01.590862    5521 out.go:177] 
	I0805 16:38:01.612868    5521 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:38:01.612983    5521 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:38:01.635442    5521 out.go:177] * Starting "multinode-985000-m02" worker node in "multinode-985000" cluster
	I0805 16:38:01.677243    5521 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:38:01.677275    5521 cache.go:56] Caching tarball of preloaded images
	I0805 16:38:01.677441    5521 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:38:01.677459    5521 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:38:01.677582    5521 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:38:01.678499    5521 start.go:360] acquireMachinesLock for multinode-985000-m02: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:38:01.678607    5521 start.go:364] duration metric: took 81.884µs to acquireMachinesLock for "multinode-985000-m02"
	I0805 16:38:01.678635    5521 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:38:01.678643    5521 fix.go:54] fixHost starting: m02
	I0805 16:38:01.679008    5521 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:38:01.679028    5521 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:38:01.688188    5521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53145
	I0805 16:38:01.688589    5521 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:38:01.688918    5521 main.go:141] libmachine: Using API Version  1
	I0805 16:38:01.688930    5521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:38:01.689133    5521 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:38:01.689265    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:01.689361    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:38:01.689448    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:38:01.689523    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:38:01.690467    5521 fix.go:112] recreateIfNeeded on multinode-985000-m02: state=Stopped err=<nil>
	I0805 16:38:01.690478    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:01.690482    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid 4678 missing from process table
	W0805 16:38:01.690569    5521 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:38:01.711256    5521 out.go:177] * Restarting existing hyperkit VM for "multinode-985000-m02" ...
	I0805 16:38:01.732476    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .Start
	I0805 16:38:01.732792    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:38:01.732823    5521 main.go:141] libmachine: (multinode-985000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid
	I0805 16:38:01.734619    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid 4678 missing from process table
	I0805 16:38:01.734647    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | pid 4678 is in state "Stopped"
	I0805 16:38:01.734664    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid...
	I0805 16:38:01.734965    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Using UUID ab5b9c9f-9e28-4bc2-8fcd-b98fce011173
	I0805 16:38:01.762464    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Generated MAC a6:1c:88:9c:44:3
	I0805 16:38:01.762484    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:38:01.762607    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6900)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:38:01.762638    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6900)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:38:01.762681    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:38:01.762732    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ab5b9c9f-9e28-4bc2-8fcd-b98fce011173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
	I0805 16:38:01.762746    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:38:01.764220    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 DEBUG: hyperkit: Pid is 5546
	I0805 16:38:01.764724    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 0
	I0805 16:38:01.764744    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:38:01.764814    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 5546
	I0805 16:38:01.766771    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:38:01.766808    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0805 16:38:01.766817    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b3b9}
	I0805 16:38:01.766827    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:38:01.766833    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b00c}
	I0805 16:38:01.766840    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Found match: a6:1c:88:9c:44:3
	I0805 16:38:01.766846    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | IP: 192.169.0.14
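
The MAC-to-IP lookup above scans /var/db/dhcpd_leases for an entry matching the generated MAC. A standalone sketch of that scan; the field names (ip_address=, hw_address=1,<mac>) and the ip-before-hw ordering are assumptions about the macOS lease-file layout, not taken from the log:

// Sketch: find the IP leased to a given MAC in /var/db/dhcpd_leases.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	want := "a6:1c:88:9c:44:3" // MAC generated in the log above
	f, err := os.Open("/var/db/dhcpd_leases")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, ","+want):
			fmt.Println("IP:", ip) // expect 192.169.0.14 per the log
			return
		}
	}
	fmt.Println("no lease found for", want)
}
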
	I0805 16:38:01.766898    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:38:01.767595    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:38:01.767783    5521 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:38:01.768260    5521 machine.go:94] provisionDockerMachine start ...
	I0805 16:38:01.768271    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:01.768389    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:01.768494    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:01.768587    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:01.768704    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:01.768800    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:01.768955    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:01.769112    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:01.769120    5521 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:38:01.772314    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:38:01.780646    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:38:01.781683    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:38:01.781725    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:38:01.781742    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:38:01.781754    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:38:02.165919    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:38:02.165934    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:38:02.281252    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:38:02.281273    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:38:02.281284    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:38:02.281293    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:38:02.282119    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:38:02.282130    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:38:07.861454    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:07 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:38:07.861538    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:07 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:38:07.861548    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:07 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:38:07.885114    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:07 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:38:12.833107    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 16:38:12.833122    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:38:12.833275    5521 buildroot.go:166] provisioning hostname "multinode-985000-m02"
	I0805 16:38:12.833287    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:38:12.833379    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:12.833467    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:12.833553    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:12.833648    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:12.833745    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:12.833872    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:12.834012    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:12.834021    5521 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000-m02 && echo "multinode-985000-m02" | sudo tee /etc/hostname
	I0805 16:38:12.899963    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000-m02
	
	I0805 16:38:12.899978    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:12.900133    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:12.900233    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:12.900332    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:12.900419    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:12.900559    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:12.900721    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:12.900732    5521 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:38:12.963291    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:38:12.963306    5521 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:38:12.963316    5521 buildroot.go:174] setting up certificates
	I0805 16:38:12.963325    5521 provision.go:84] configureAuth start
	I0805 16:38:12.963332    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:38:12.963463    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:38:12.963563    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:12.963644    5521 provision.go:143] copyHostCerts
	I0805 16:38:12.963672    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:38:12.963719    5521 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:38:12.963724    5521 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:38:12.963846    5521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:38:12.964058    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:38:12.964088    5521 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:38:12.964093    5521 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:38:12.964171    5521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:38:12.964327    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:38:12.964357    5521 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:38:12.964362    5521 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:38:12.964431    5521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:38:12.964609    5521 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-985000-m02]
	I0805 16:38:13.029718    5521 provision.go:177] copyRemoteCerts
	I0805 16:38:13.029767    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:38:13.029782    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:13.029926    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:13.030013    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.030100    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:13.030195    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:38:13.063868    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:38:13.063938    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:38:13.083721    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:38:13.083789    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:38:13.103391    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:38:13.103455    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0805 16:38:13.123247    5521 provision.go:87] duration metric: took 159.914588ms to configureAuth
	I0805 16:38:13.123259    5521 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:38:13.123427    5521 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:38:13.123441    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:13.123574    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:13.123660    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:13.123737    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.123827    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.123918    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:13.124026    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:13.124190    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:13.124198    5521 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:38:13.182171    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:38:13.182183    5521 buildroot.go:70] root file system type: tmpfs
	I0805 16:38:13.182268    5521 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:38:13.182279    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:13.182405    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:13.182503    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.182591    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.182683    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:13.182809    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:13.182954    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:13.183003    5521 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:38:13.248138    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:38:13.248155    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:13.248304    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:13.248405    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.248495    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.248573    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:13.248699    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:13.248870    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:13.248883    5521 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:38:14.774504    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:38:14.774518    5521 machine.go:97] duration metric: took 13.006233682s to provisionDockerMachine
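
The docker.service unit written above is rendered per profile (the NO_PROXY environment and the provider label vary) and installed only when it differs from what is on disk, via the diff/mv/daemon-reload command shown earlier. A toy text/template sketch of the rendering step, with a deliberately shortened unit; this is an assumption-laden illustration, not minikube's actual template:

// Sketch: render a docker.service override from profile-specific values.
package main

import (
	"os"
	"text/template"
)

const unit = `[Service]
Environment="NO_PROXY={{.NoProxy}}"
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --label provider={{.Provider}}
`

type profile struct {
	NoProxy, Provider string
}

func main() {
	t := template.Must(template.New("docker").Parse(unit))
	if err := t.Execute(os.Stdout, profile{NoProxy: "192.169.0.13", Provider: "hyperkit"}); err != nil {
		panic(err)
	}
}
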
	I0805 16:38:14.774527    5521 start.go:293] postStartSetup for "multinode-985000-m02" (driver="hyperkit")
	I0805 16:38:14.774535    5521 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:38:14.774546    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:14.774714    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:38:14.774729    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:14.774827    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:14.774909    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:14.774998    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:14.775085    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:38:14.816544    5521 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:38:14.820061    5521 command_runner.go:130] > NAME=Buildroot
	I0805 16:38:14.820070    5521 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:38:14.820074    5521 command_runner.go:130] > ID=buildroot
	I0805 16:38:14.820078    5521 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:38:14.820083    5521 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:38:14.820286    5521 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:38:14.820300    5521 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:38:14.820397    5521 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:38:14.820538    5521 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:38:14.820545    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:38:14.820707    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:38:14.833566    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:38:14.861185    5521 start.go:296] duration metric: took 86.648603ms for postStartSetup
	I0805 16:38:14.861206    5521 fix.go:56] duration metric: took 13.182545662s for fixHost
	I0805 16:38:14.861238    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:14.861375    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:14.861467    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:14.861563    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:14.861652    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:14.861768    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:14.861912    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:14.861919    5521 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:38:14.917690    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722901094.828326920
	
	I0805 16:38:14.917701    5521 fix.go:216] guest clock: 1722901094.828326920
	I0805 16:38:14.917706    5521 fix.go:229] Guest: 2024-08-05 16:38:14.82832692 -0700 PDT Remote: 2024-08-05 16:38:14.861212 -0700 PDT m=+55.555905067 (delta=-32.88508ms)
	I0805 16:38:14.917716    5521 fix.go:200] guest clock delta is within tolerance: -32.88508ms
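
The guest-clock check above runs `date +%s.%N` inside the VM and compares the result to the host clock; here the delta was -32.88508ms, well within tolerance. A small sketch of that comparison (the one-second tolerance is an assumed value, for illustration only):

// Sketch: parse `date +%s.%N` output and compute the guest/host clock delta.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	out := "1722901094.828326920" // guest output captured in the log above
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64) // nine digits: nanoseconds
	guest := time.Unix(sec, nsec)
	delta := guest.Sub(time.Now())
	const tolerance = time.Second // assumed threshold
	fmt.Printf("guest=%s delta=%s withinTolerance=%v\n", guest, delta, delta.Abs() <= tolerance)
}
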
	I0805 16:38:14.917719    5521 start.go:83] releasing machines lock for "multinode-985000-m02", held for 13.239083998s
	I0805 16:38:14.917737    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:14.917864    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:38:14.938999    5521 out.go:177] * Found network options:
	I0805 16:38:14.996112    5521 out.go:177]   - NO_PROXY=192.169.0.13
	W0805 16:38:15.018259    5521 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:38:15.018300    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:15.019232    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:15.019568    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:15.019685    5521 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:38:15.019730    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	W0805 16:38:15.019879    5521 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:38:15.019923    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:15.019984    5521 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:38:15.020001    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:15.020157    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:15.020211    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:15.020380    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:15.020412    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:15.020614    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:15.020625    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:38:15.020777    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:38:15.053501    5521 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:38:15.053659    5521 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:38:15.053723    5521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:38:15.098852    5521 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:38:15.098927    5521 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:38:15.098945    5521 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:38:15.098953    5521 start.go:495] detecting cgroup driver to use...
	I0805 16:38:15.099023    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:38:15.113615    5521 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:38:15.113873    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:38:15.122000    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:38:15.130421    5521 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:38:15.130464    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:38:15.138622    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:38:15.146769    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:38:15.154881    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:38:15.162940    5521 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:38:15.171228    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:38:15.179545    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:38:15.187667    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:38:15.196019    5521 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:38:15.203310    5521 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
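
The bridge-netfilter check above shells out to sysctl; the same value can be read straight from /proc/sys on the Linux guest. A minimal Linux-only sketch:

// Sketch: check net.bridge.bridge-nf-call-iptables without shelling out.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	b, err := os.ReadFile("/proc/sys/net/bridge/bridge-nf-call-iptables")
	if err != nil {
		panic(err) // the br_netfilter module may not be loaded
	}
	v := strings.TrimSpace(string(b))
	fmt.Println("net.bridge.bridge-nf-call-iptables =", v)
	// The log's next step, `echo 1 > /proc/sys/net/ipv4/ip_forward`, is the
	// write-side equivalent for enabling IP forwarding.
}
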
	I0805 16:38:15.203418    5521 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:38:15.210899    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:38:15.315364    5521 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:38:15.333178    5521 start.go:495] detecting cgroup driver to use...
	I0805 16:38:15.333246    5521 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:38:15.351847    5521 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:38:15.352028    5521 command_runner.go:130] > [Unit]
	I0805 16:38:15.352037    5521 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:38:15.352041    5521 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:38:15.352046    5521 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:38:15.352050    5521 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:38:15.352057    5521 command_runner.go:130] > StartLimitBurst=3
	I0805 16:38:15.352063    5521 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:38:15.352066    5521 command_runner.go:130] > [Service]
	I0805 16:38:15.352070    5521 command_runner.go:130] > Type=notify
	I0805 16:38:15.352078    5521 command_runner.go:130] > Restart=on-failure
	I0805 16:38:15.352084    5521 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0805 16:38:15.352092    5521 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:38:15.352102    5521 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:38:15.352115    5521 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:38:15.352122    5521 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:38:15.352128    5521 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:38:15.352133    5521 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:38:15.352139    5521 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:38:15.352148    5521 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:38:15.352155    5521 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:38:15.352158    5521 command_runner.go:130] > ExecStart=
	I0805 16:38:15.352169    5521 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:38:15.352174    5521 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:38:15.352181    5521 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:38:15.352187    5521 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:38:15.352190    5521 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:38:15.352193    5521 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:38:15.352197    5521 command_runner.go:130] > LimitCORE=infinity
	I0805 16:38:15.352202    5521 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:38:15.352209    5521 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:38:15.352215    5521 command_runner.go:130] > TasksMax=infinity
	I0805 16:38:15.352219    5521 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:38:15.352224    5521 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:38:15.352229    5521 command_runner.go:130] > Delegate=yes
	I0805 16:38:15.352237    5521 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:38:15.352249    5521 command_runner.go:130] > KillMode=process
	I0805 16:38:15.352253    5521 command_runner.go:130] > [Install]
	I0805 16:38:15.352256    5521 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:38:15.352438    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:38:15.367477    5521 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:38:15.384493    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:38:15.395662    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:38:15.405888    5521 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:38:15.468063    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:38:15.478558    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:38:15.493596    5521 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:38:15.493658    5521 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:38:15.496390    5521 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:38:15.496655    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:38:15.503652    5521 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:38:15.519898    5521 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:38:15.619700    5521 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:38:15.722257    5521 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:38:15.722278    5521 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:38:15.735967    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:38:15.833114    5521 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:39:16.651467    5521 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0805 16:39:16.651483    5521 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0805 16:39:16.651496    5521 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.818287184s)
	I0805 16:39:16.651563    5521 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0805 16:39:16.661216    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:39:16.661228    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:13.420146905Z" level=info msg="Starting up"
	I0805 16:39:16.661236    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:13.420872507Z" level=info msg="containerd not running, starting managed containerd"
	I0805 16:39:16.661248    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:13.421358599Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=497
	I0805 16:39:16.661258    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.437602421Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0805 16:39:16.661268    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454632195Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0805 16:39:16.661294    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454680682Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0805 16:39:16.661303    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454724229Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0805 16:39:16.661313    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454738567Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661323    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454771554Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:39:16.661333    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454832124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661358    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455014271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:39:16.661368    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455053874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661380    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455070229Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:39:16.661390    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455079145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661401    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455109467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661411    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455253015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661426    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.456861169Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:39:16.661438    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.456915956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661496    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457058253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:39:16.661510    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457101847Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0805 16:39:16.661521    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457151686Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0805 16:39:16.661529    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457193291Z" level=info msg="metadata content store policy set" policy=shared
	I0805 16:39:16.661537    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457536850Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0805 16:39:16.661546    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457637715Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0805 16:39:16.661555    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457694331Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0805 16:39:16.661564    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457728855Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0805 16:39:16.661573    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457761160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0805 16:39:16.661582    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457827388Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0805 16:39:16.661591    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458029068Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0805 16:39:16.661599    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458106036Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0805 16:39:16.661608    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458141669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0805 16:39:16.661618    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458173056Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0805 16:39:16.661628    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458207694Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661638    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458242036Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661647    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458286329Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661656    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458320625Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661666    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458360911Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661683    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458395522Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661748    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458435461Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661759    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458468994Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661770    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458507655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661780    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458543528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661789    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458575409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661797    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458606090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661806    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458640753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661816    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458672527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661825    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458702141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661833    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458786564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661843    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458833470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661851    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458867942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661860    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458897905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661869    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458927275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661878    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458956835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661891    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458999344Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0805 16:39:16.661900    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459042185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661909    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459076838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661918    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459117163Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0805 16:39:16.661928    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459171448Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0805 16:39:16.661939    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459206426Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0805 16:39:16.661948    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459236530Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0805 16:39:16.662025    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459266816Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0805 16:39:16.662039    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459297300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.662049    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459333043Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0805 16:39:16.662058    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459365111Z" level=info msg="NRI interface is disabled by configuration."
	I0805 16:39:16.662068    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459520257Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0805 16:39:16.662076    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459589097Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0805 16:39:16.662085    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459647415Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0805 16:39:16.662098    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459731249Z" level=info msg="containerd successfully booted in 0.022632s"
	I0805 16:39:16.662106    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.442507541Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0805 16:39:16.662113    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.446047233Z" level=info msg="Loading containers: start."
	I0805 16:39:16.662134    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.533905829Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0805 16:39:16.662147    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.600469950Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0805 16:39:16.662155    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.643991126Z" level=info msg="Loading containers: done."
	I0805 16:39:16.662165    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.660081921Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0805 16:39:16.662172    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.660224037Z" level=info msg="Daemon has completed initialization"
	I0805 16:39:16.662182    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.679152512Z" level=info msg="API listen on /var/run/docker.sock"
	I0805 16:39:16.662188    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	I0805 16:39:16.662195    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.679221051Z" level=info msg="API listen on [::]:2376"
	I0805 16:39:16.662203    5521 command_runner.go:130] > Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.785720729Z" level=info msg="Processing signal 'terminated'"
	I0805 16:39:16.662211    5521 command_runner.go:130] > Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786631200Z" level=info msg="Daemon shutdown complete"
	I0805 16:39:16.662222    5521 command_runner.go:130] > Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786734889Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0805 16:39:16.662233    5521 command_runner.go:130] > Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786818951Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	I0805 16:39:16.662243    5521 command_runner.go:130] > Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786854490Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0805 16:39:16.662276    5521 command_runner.go:130] > Aug 05 23:38:15 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0805 16:39:16.662283    5521 command_runner.go:130] > Aug 05 23:38:16 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0805 16:39:16.662289    5521 command_runner.go:130] > Aug 05 23:38:16 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0805 16:39:16.662295    5521 command_runner.go:130] > Aug 05 23:38:16 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:39:16.662302    5521 command_runner.go:130] > Aug 05 23:38:16 multinode-985000-m02 dockerd[909]: time="2024-08-05T23:38:16.819558392Z" level=info msg="Starting up"
	I0805 16:39:16.662312    5521 command_runner.go:130] > Aug 05 23:39:16 multinode-985000-m02 dockerd[909]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0805 16:39:16.662323    5521 command_runner.go:130] > Aug 05 23:39:16 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0805 16:39:16.662329    5521 command_runner.go:130] > Aug 05 23:39:16 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0805 16:39:16.662335    5521 command_runner.go:130] > Aug 05 23:39:16 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
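Read end to end, the journal above shows the failure pattern: the first dockerd (pid 489) launches its managed containerd, reaches "API listen", and is then stopped cleanly for reconfiguration, while the restarted dockerd (pid 909) never gets past "Starting up" and fails exactly 60 seconds later with a dial timeout on /run/containerd/containerd.sock. A hedged diagnostic sketch, not part of the test run, that one could run on the node to confirm which containerd socket actually exists at that point:
	# Sketch: compare the system containerd socket dockerd timed out on with the
	# docker-managed one the first dockerd created.
	ls -l /run/containerd/containerd.sock /var/run/docker/containerd/containerd.sock
	sudo systemctl is-active containerd || true  # containerd was stopped earlier in this run
	sudo journalctl -u docker --no-pager -n 5    # should end with the dial timeout above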
	I0805 16:39:16.687918    5521 out.go:177] 
	W0805 16:39:16.708897    5521 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 05 23:38:13 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:38:13 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:13.420146905Z" level=info msg="Starting up"
	Aug 05 23:38:13 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:13.420872507Z" level=info msg="containerd not running, starting managed containerd"
	Aug 05 23:38:13 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:13.421358599Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=497
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.437602421Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454632195Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454680682Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454724229Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454738567Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454771554Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454832124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455014271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455053874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455070229Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455079145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455109467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455253015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.456861169Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.456915956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457058253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457101847Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457151686Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457193291Z" level=info msg="metadata content store policy set" policy=shared
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457536850Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457637715Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457694331Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457728855Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457761160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457827388Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458029068Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458106036Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458141669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458173056Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458207694Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458242036Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458286329Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458320625Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458360911Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458395522Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458435461Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458468994Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458507655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458543528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458575409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458606090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458640753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458672527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458702141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458786564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458833470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458867942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458897905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458927275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458956835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458999344Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459042185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459076838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459117163Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459171448Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459206426Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459236530Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459266816Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459297300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459333043Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459365111Z" level=info msg="NRI interface is disabled by configuration."
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459520257Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459589097Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459647415Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459731249Z" level=info msg="containerd successfully booted in 0.022632s"
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.442507541Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.446047233Z" level=info msg="Loading containers: start."
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.533905829Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.600469950Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.643991126Z" level=info msg="Loading containers: done."
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.660081921Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.660224037Z" level=info msg="Daemon has completed initialization"
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.679152512Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 05 23:38:14 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.679221051Z" level=info msg="API listen on [::]:2376"
	Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.785720729Z" level=info msg="Processing signal 'terminated'"
	Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786631200Z" level=info msg="Daemon shutdown complete"
	Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786734889Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786818951Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786854490Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 05 23:38:15 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 05 23:38:16 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 05 23:38:16 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 05 23:38:16 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:38:16 multinode-985000-m02 dockerd[909]: time="2024-08-05T23:38:16.819558392Z" level=info msg="Starting up"
	Aug 05 23:39:16 multinode-985000-m02 dockerd[909]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 05 23:39:16 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 05 23:39:16 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 05 23:39:16 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0805 16:39:16.709036    5521 out.go:239] * 
	W0805 16:39:16.710224    5521 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:39:16.772583    5521 out.go:177] 

                                                
                                                
** /stderr **
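The journal excerpt above shows the proximate failure: after the docker.service restart at 23:38:16, dockerd blocks waiting for /run/containerd/containerd.sock and gives up exactly 60 seconds later at 23:39:16 with "context deadline exceeded", which is what surfaces as the non-zero exit below. A minimal Go sketch of that blocking-dial-with-deadline pattern follows; it is illustrative only, not dockerd's actual code, with the socket path and the 60s window taken from the log:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// The journal shows "Starting up" at 23:38:16 and the failure at
		// 23:39:16, i.e. a 60-second deadline on reaching the containerd socket.
		ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
		defer cancel()

		var d net.Dialer
		for {
			// Each individual dial fails fast while nothing is listening
			// ("connection refused" / "no such file or directory") ...
			conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
			if err == nil {
				conn.Close()
				fmt.Println("containerd socket is up")
				return
			}
			select {
			case <-ctx.Done():
				// ... and the retry loop gives up with the same error string
				// seen in the journal: "context deadline exceeded".
				fmt.Println("failed to dial:", ctx.Err())
				return
			case <-time.After(time.Second):
			}
		}
	}

What the sketch cannot explain, and what the rest of the post-mortem below is for, is why containerd never answered within the window, given that the journal shows it had "successfully booted" under the previous dockerd instance at 23:38:13.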
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-amd64 node list -p multinode-985000" : exit status 90
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-985000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-985000 logs -n 25: (2.905136203s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g -- nslookup  |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b -- nslookup  |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g              |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g -- sh        |                  |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b              |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |         |         |                     |                     |
	| node    | add -p multinode-985000 -v 3         | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:35 PDT |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	| node    | multinode-985000 node stop m03       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:35 PDT | 05 Aug 24 16:35 PDT |
	| node    | multinode-985000 node start          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:35 PDT | 05 Aug 24 16:36 PDT |
	|         | m03 -v=7 --alsologtostderr           |                  |         |         |                     |                     |
	| node    | list -p multinode-985000             | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:36 PDT |                     |
	| stop    | -p multinode-985000                  | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:36 PDT | 05 Aug 24 16:37 PDT |
	| start   | -p multinode-985000                  | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:37 PDT |                     |
	|         | --wait=true -v=8                     |                  |         |         |                     |                     |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	| node    | list -p multinode-985000             | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:39 PDT |                     |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 16:37:19
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 16:37:19.344110    5521 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:37:19.344466    5521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:37:19.344474    5521 out.go:304] Setting ErrFile to fd 2...
	I0805 16:37:19.344479    5521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:37:19.344702    5521 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:37:19.346290    5521 out.go:298] Setting JSON to false
	I0805 16:37:19.368484    5521 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4010,"bootTime":1722897029,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:37:19.368574    5521 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:37:19.390244    5521 out.go:177] * [multinode-985000] minikube v1.33.1 on Darwin 14.5
	I0805 16:37:19.432083    5521 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:37:19.432145    5521 notify.go:220] Checking for updates...
	I0805 16:37:19.474965    5521 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:37:19.495989    5521 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:37:19.517187    5521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:37:19.537983    5521 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:37:19.558962    5521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:37:19.580823    5521 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:37:19.580992    5521 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:37:19.581649    5521 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:37:19.581721    5521 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:37:19.591086    5521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53115
	I0805 16:37:19.591452    5521 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:37:19.591907    5521 main.go:141] libmachine: Using API Version  1
	I0805 16:37:19.591915    5521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:37:19.592186    5521 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:37:19.592316    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:19.621203    5521 out.go:177] * Using the hyperkit driver based on existing profile
	I0805 16:37:19.663060    5521 start.go:297] selected driver: hyperkit
	I0805 16:37:19.663084    5521 start.go:901] validating driver "hyperkit" against &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:f
alse ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binary
Mirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:37:19.663335    5521 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:37:19.663521    5521 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:37:19.663719    5521 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 16:37:19.672949    5521 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 16:37:19.676917    5521 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:37:19.676939    5521 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 16:37:19.679650    5521 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:37:19.679719    5521 cni.go:84] Creating CNI manager for ""
	I0805 16:37:19.679731    5521 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0805 16:37:19.679807    5521 start.go:340] cluster config:
	{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:
false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:37:19.679904    5521 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:37:19.721789    5521 out.go:177] * Starting "multinode-985000" primary control-plane node in "multinode-985000" cluster
	I0805 16:37:19.742954    5521 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:37:19.743026    5521 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 16:37:19.743048    5521 cache.go:56] Caching tarball of preloaded images
	I0805 16:37:19.743247    5521 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:37:19.743265    5521 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:37:19.743456    5521 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:37:19.744298    5521 start.go:360] acquireMachinesLock for multinode-985000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:37:19.744469    5521 start.go:364] duration metric: took 148.41µs to acquireMachinesLock for "multinode-985000"
	I0805 16:37:19.744508    5521 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:37:19.744520    5521 fix.go:54] fixHost starting: 
	I0805 16:37:19.744954    5521 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:37:19.744979    5521 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:37:19.753692    5521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53117
	I0805 16:37:19.754053    5521 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:37:19.754374    5521 main.go:141] libmachine: Using API Version  1
	I0805 16:37:19.754383    5521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:37:19.754660    5521 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:37:19.754807    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:19.754921    5521 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:37:19.755005    5521 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:37:19.755109    5521 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:37:19.755997    5521 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid 4651 missing from process table
	I0805 16:37:19.756024    5521 fix.go:112] recreateIfNeeded on multinode-985000: state=Stopped err=<nil>
	I0805 16:37:19.756039    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	W0805 16:37:19.756134    5521 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:37:19.797962    5521 out.go:177] * Restarting existing hyperkit VM for "multinode-985000" ...
	I0805 16:37:19.821296    5521 main.go:141] libmachine: (multinode-985000) Calling .Start
	I0805 16:37:19.821573    5521 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:37:19.821663    5521 main.go:141] libmachine: (multinode-985000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid
	I0805 16:37:19.823405    5521 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid 4651 missing from process table
	I0805 16:37:19.823427    5521 main.go:141] libmachine: (multinode-985000) DBG | pid 4651 is in state "Stopped"
	I0805 16:37:19.823442    5521 main.go:141] libmachine: (multinode-985000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid...
	I0805 16:37:19.823689    5521 main.go:141] libmachine: (multinode-985000) DBG | Using UUID 3ac698fc-f622-443b-898d-9b152fa64288
	I0805 16:37:19.935040    5521 main.go:141] libmachine: (multinode-985000) DBG | Generated MAC e2:6:14:d2:13:ae
	I0805 16:37:19.935070    5521 main.go:141] libmachine: (multinode-985000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:37:19.935187    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a67e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I0805 16:37:19.935220    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a67e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I0805 16:37:19.935274    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3ac698fc-f622-443b-898d-9b152fa64288", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/1937
3-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:37:19.935303    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3ac698fc-f622-443b-898d-9b152fa64288 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=
serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
	I0805 16:37:19.935323    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:37:19.936734    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 DEBUG: hyperkit: Pid is 5533
	I0805 16:37:19.937092    5521 main.go:141] libmachine: (multinode-985000) DBG | Attempt 0
	I0805 16:37:19.937106    5521 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:37:19.937205    5521 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 5533
	I0805 16:37:19.939053    5521 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:37:19.939115    5521 main.go:141] libmachine: (multinode-985000) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0805 16:37:19.939146    5521 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:37:19.939167    5521 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b00c}
	I0805 16:37:19.939179    5521 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:37:19.939190    5521 main.go:141] libmachine: (multinode-985000) DBG | Found match: e2:6:14:d2:13:ae
	I0805 16:37:19.939202    5521 main.go:141] libmachine: (multinode-985000) DBG | IP: 192.169.0.13
	I0805 16:37:19.939251    5521 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:37:19.939918    5521 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:37:19.940105    5521 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:37:19.940507    5521 machine.go:94] provisionDockerMachine start ...
	I0805 16:37:19.940521    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:19.940712    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:19.940833    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:19.940944    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:19.941063    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:19.941184    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:19.941317    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:19.941534    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:19.941543    5521 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:37:19.945439    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:37:19.998236    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:37:19.999189    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:37:19.999209    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:37:19.999217    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:37:19.999225    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:37:20.381357    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:37:20.381372    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:37:20.495827    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:37:20.495847    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:37:20.495864    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:37:20.495880    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:37:20.496727    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:37:20.496740    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:37:26.053033    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:26 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0805 16:37:26.053095    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:26 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0805 16:37:26.053106    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:26 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0805 16:37:26.078427    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:26 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0805 16:37:31.014343    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 16:37:31.014358    5521 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:37:31.014500    5521 buildroot.go:166] provisioning hostname "multinode-985000"
	I0805 16:37:31.014511    5521 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:37:31.014618    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.014720    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:31.014844    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.014943    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.015061    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:31.015194    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:31.015348    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:31.015359    5521 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000 && echo "multinode-985000" | sudo tee /etc/hostname
	I0805 16:37:31.093711    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000
	
	I0805 16:37:31.093738    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.093873    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:31.093973    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.094065    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.094154    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:31.094291    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:31.094436    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:31.094447    5521 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:37:31.166381    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0805 16:37:31.166401    5521 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:37:31.166420    5521 buildroot.go:174] setting up certificates
	I0805 16:37:31.166425    5521 provision.go:84] configureAuth start
	I0805 16:37:31.166432    5521 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:37:31.166566    5521 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:37:31.166671    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.166751    5521 provision.go:143] copyHostCerts
	I0805 16:37:31.166779    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:37:31.166848    5521 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:37:31.166856    5521 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:37:31.167016    5521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:37:31.167224    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:37:31.167266    5521 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:37:31.167271    5521 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:37:31.167361    5521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:37:31.167503    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:37:31.167542    5521 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:37:31.167553    5521 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:37:31.167640    5521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:37:31.167799    5521 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-985000]
	I0805 16:37:31.333929    5521 provision.go:177] copyRemoteCerts
	I0805 16:37:31.333986    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:37:31.334003    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.334141    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:31.334246    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.334341    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:31.334442    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:37:31.373502    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:37:31.373592    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:37:31.393275    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:37:31.393333    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0805 16:37:31.412894    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:37:31.412951    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:37:31.432545    5521 provision.go:87] duration metric: took 266.106701ms to configureAuth
	I0805 16:37:31.432558    5521 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:37:31.432725    5521 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:37:31.432742    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:31.432881    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.432989    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:31.433084    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.433176    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.433269    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:31.433395    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:31.433519    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:31.433527    5521 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:37:31.498617    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:37:31.498629    5521 buildroot.go:70] root file system type: tmpfs
	I0805 16:37:31.498708    5521 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:37:31.498721    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.498863    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:31.498974    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.499071    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.499155    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:31.499273    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:31.499401    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:31.499448    5521 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:37:31.575743    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:37:31.575771    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.575913    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:31.576016    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.576109    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.576205    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:31.576341    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:31.576481    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:31.576493    5521 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:37:33.234695    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:37:33.234711    5521 machine.go:97] duration metric: took 13.294178335s to provisionDockerMachine
	I0805 16:37:33.234727    5521 start.go:293] postStartSetup for "multinode-985000" (driver="hyperkit")
	I0805 16:37:33.234735    5521 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:37:33.234747    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:33.234933    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:37:33.234947    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:33.235048    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:33.235138    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:33.235219    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:33.235304    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:37:33.276364    5521 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:37:33.279613    5521 command_runner.go:130] > NAME=Buildroot
	I0805 16:37:33.279624    5521 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:37:33.279629    5521 command_runner.go:130] > ID=buildroot
	I0805 16:37:33.279635    5521 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:37:33.279641    5521 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:37:33.279904    5521 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:37:33.279915    5521 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:37:33.280022    5521 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:37:33.280208    5521 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:37:33.280215    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:37:33.280420    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:37:33.289381    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:37:33.319551    5521 start.go:296] duration metric: took 84.814531ms for postStartSetup
	I0805 16:37:33.319580    5521 fix.go:56] duration metric: took 13.575045291s for fixHost
	I0805 16:37:33.319592    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:33.319764    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:33.319879    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:33.319970    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:33.320074    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:33.320209    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:33.320347    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:33.320353    5521 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:37:33.386078    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722901053.539565012
	
	I0805 16:37:33.386090    5521 fix.go:216] guest clock: 1722901053.539565012
	I0805 16:37:33.386095    5521 fix.go:229] Guest: 2024-08-05 16:37:33.539565012 -0700 PDT Remote: 2024-08-05 16:37:33.319583 -0700 PDT m=+14.014329761 (delta=219.982012ms)
	I0805 16:37:33.386114    5521 fix.go:200] guest clock delta is within tolerance: 219.982012ms
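	The tolerance check above works by running date +%s.%N on the guest and diffing the result against host wall-clock time; only large skews trigger a re-sync. A rough host-side sketch with the SSH identity from this run (assumes python3 on the host for a sub-second clock, since BSD date lacks %N):
	
	#!/bin/bash
	key=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa
	guest_t=$(ssh -i "$key" docker@192.169.0.13 'date +%s.%N')       # guest is Linux, %N works there
	host_t=$(python3 -c 'import time; print(f"{time.time():.9f}")')  # sub-second host clock
	# Absolute skew in seconds; ~0.22s here, well inside minikube's tolerance.
	echo "$host_t $guest_t" | awk '{d=$1-$2; if (d<0) d=-d; printf "delta=%.3fs\n", d}'
	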
	I0805 16:37:33.386118    5521 start.go:83] releasing machines lock for "multinode-985000", held for 13.641620815s
	I0805 16:37:33.386138    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:33.386279    5521 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:37:33.386394    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:33.386730    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:33.386845    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:33.386917    5521 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:37:33.386942    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:33.387003    5521 ssh_runner.go:195] Run: cat /version.json
	I0805 16:37:33.387017    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:33.387030    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:33.387128    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:33.387144    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:33.387234    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:33.387245    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:33.387325    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:37:33.387345    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:33.387431    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:37:33.421764    5521 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0805 16:37:33.421883    5521 ssh_runner.go:195] Run: systemctl --version
	I0805 16:37:33.467550    5521 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:37:33.468651    5521 command_runner.go:130] > systemd 252 (252)
	I0805 16:37:33.468690    5521 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0805 16:37:33.468805    5521 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:37:33.473715    5521 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:37:33.473736    5521 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:37:33.473771    5521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:37:33.487255    5521 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:37:33.487298    5521 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:37:33.487311    5521 start.go:495] detecting cgroup driver to use...
	I0805 16:37:33.487409    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:37:33.501851    5521 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:37:33.502107    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:37:33.510909    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:37:33.519656    5521 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:37:33.519696    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:37:33.528321    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:37:33.536918    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:37:33.545942    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:37:33.554600    5521 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:37:33.563425    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:37:33.572074    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:37:33.580764    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:37:33.589491    5521 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:37:33.597187    5521 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:37:33.597327    5521 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:37:33.605146    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:37:33.699080    5521 ssh_runner.go:195] Run: sudo systemctl restart containerd
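	The sed pass above rewrites /etc/containerd/config.toml so containerd matches the cgroupfs driver kubelet will be told to use, pins the pause image, switches to the runc v2 shim, and points CNI at /etc/cni/net.d. The file itself is not dumped in this log; after the edits the relevant fragment should look roughly like this (a sketch of the expected result, not a capture):
	
	# /etc/containerd/config.toml (relevant keys after the edits)
	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.9"
	  enable_unprivileged_ports = true
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false
	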
	I0805 16:37:33.715293    5521 start.go:495] detecting cgroup driver to use...
	I0805 16:37:33.715372    5521 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:37:33.725461    5521 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:37:33.725955    5521 command_runner.go:130] > [Unit]
	I0805 16:37:33.725965    5521 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:37:33.725969    5521 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:37:33.725974    5521 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:37:33.725979    5521 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:37:33.725989    5521 command_runner.go:130] > StartLimitBurst=3
	I0805 16:37:33.725993    5521 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:37:33.725997    5521 command_runner.go:130] > [Service]
	I0805 16:37:33.726001    5521 command_runner.go:130] > Type=notify
	I0805 16:37:33.726005    5521 command_runner.go:130] > Restart=on-failure
	I0805 16:37:33.726011    5521 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:37:33.726019    5521 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:37:33.726025    5521 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:37:33.726031    5521 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:37:33.726036    5521 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:37:33.726042    5521 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:37:33.726048    5521 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:37:33.726063    5521 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:37:33.726069    5521 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:37:33.726075    5521 command_runner.go:130] > ExecStart=
	I0805 16:37:33.726090    5521 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:37:33.726094    5521 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:37:33.726100    5521 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:37:33.726107    5521 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:37:33.726111    5521 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:37:33.726115    5521 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:37:33.726121    5521 command_runner.go:130] > LimitCORE=infinity
	I0805 16:37:33.726127    5521 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:37:33.726132    5521 command_runner.go:130] > # Only systemd 226 and above support this option.
	I0805 16:37:33.726137    5521 command_runner.go:130] > TasksMax=infinity
	I0805 16:37:33.726141    5521 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:37:33.726158    5521 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:37:33.726161    5521 command_runner.go:130] > Delegate=yes
	I0805 16:37:33.726166    5521 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:37:33.726170    5521 command_runner.go:130] > KillMode=process
	I0805 16:37:33.726173    5521 command_runner.go:130] > [Install]
	I0805 16:37:33.726181    5521 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:37:33.726297    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:37:33.737088    5521 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:37:33.751275    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:37:33.762646    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:37:33.773482    5521 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:37:33.799587    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:37:33.810018    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:37:33.824851    5521 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:37:33.825036    5521 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:37:33.828060    5521 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:37:33.828191    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:37:33.835356    5521 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:37:33.848939    5521 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:37:33.941490    5521 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:37:34.038935    5521 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:37:34.039041    5521 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:37:34.053894    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:37:34.163116    5521 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:37:36.488671    5521 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.32553387s)
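	docker.go writes a small /etc/docker/daemon.json (130 bytes here) before this restart to force the cgroupfs driver. Its contents are not echoed in the log; based on the "configuring docker to use cgroupfs" message it plausibly has this shape (an assumption, not a capture from this run):
	
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}
	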
	I0805 16:37:36.488731    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:37:36.499891    5521 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 16:37:36.512512    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:37:36.522638    5521 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:37:36.618869    5521 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:37:36.714175    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:37:36.811543    5521 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:37:36.825669    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:37:36.836762    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:37:36.945275    5521 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
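	cri-docker is socket-activated, which is why the socket unit is unmasked, enabled, and restarted before the service itself is forced up. The sequence above, collapsed into one idempotent snippet:
	
	#!/bin/bash
	sudo systemctl unmask cri-docker.socket    # undo any earlier mask
	sudo systemctl enable cri-docker.socket    # listen at boot / on demand
	sudo systemctl daemon-reload
	sudo systemctl restart cri-docker.socket   # re-bind /var/run/cri-dockerd.sock
	sudo systemctl daemon-reload
	sudo systemctl restart cri-docker.service  # start the service now, not lazily
	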
	I0805 16:37:37.004002    5521 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:37:37.004108    5521 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:37:37.008235    5521 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0805 16:37:37.008254    5521 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0805 16:37:37.008260    5521 command_runner.go:130] > Device: 0,22	Inode: 751         Links: 1
	I0805 16:37:37.008265    5521 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0805 16:37:37.008270    5521 command_runner.go:130] > Access: 2024-08-05 23:37:37.112441730 +0000
	I0805 16:37:37.008274    5521 command_runner.go:130] > Modify: 2024-08-05 23:37:37.112441730 +0000
	I0805 16:37:37.008280    5521 command_runner.go:130] > Change: 2024-08-05 23:37:37.113441659 +0000
	I0805 16:37:37.008283    5521 command_runner.go:130] >  Birth: -
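	"Will wait 60s for socket path" is a plain stat poll: retry until the unix socket exists, then proceed. A standalone equivalent:
	
	#!/bin/bash
	sock=/var/run/cri-dockerd.sock
	for _ in $(seq 1 60); do
	    [ -S "$sock" ] && { stat "$sock"; exit 0; }   # -S: exists and is a socket
	    sleep 1
	done
	echo "timed out waiting for $sock" >&2
	exit 1
	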
	I0805 16:37:37.008458    5521 start.go:563] Will wait 60s for crictl version
	I0805 16:37:37.008503    5521 ssh_runner.go:195] Run: which crictl
	I0805 16:37:37.011447    5521 command_runner.go:130] > /usr/bin/crictl
	I0805 16:37:37.011673    5521 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:37:37.037547    5521 command_runner.go:130] > Version:  0.1.0
	I0805 16:37:37.037560    5521 command_runner.go:130] > RuntimeName:  docker
	I0805 16:37:37.037564    5521 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0805 16:37:37.037568    5521 command_runner.go:130] > RuntimeApiVersion:  v1
	I0805 16:37:37.038675    5521 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0805 16:37:37.038749    5521 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:37:37.056467    5521 command_runner.go:130] > 27.1.1
	I0805 16:37:37.057465    5521 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:37:37.074514    5521 command_runner.go:130] > 27.1.1
	I0805 16:37:37.099565    5521 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 16:37:37.099612    5521 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:37:37.099970    5521 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0805 16:37:37.104644    5521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
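	The /etc/hosts update above is the usual filter-then-append idiom: strip any existing host.minikube.internal line, append the fresh mapping, and write through a temp file so the live hosts file is never left truncated. Reformatted for readability:
	
	#!/bin/bash
	ip=192.169.0.1
	{ grep -v $'\thost.minikube.internal$' /etc/hosts      # drop the stale entry
	  printf '%s\thost.minikube.internal\n' "$ip"          # append the new one
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
	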
	I0805 16:37:37.114271    5521 kubeadm.go:883] updating cluster {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 16:37:37.114369    5521 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:37:37.114424    5521 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:37:37.126439    5521 command_runner.go:130] > kindest/kindnetd:v20240730-75a5af0c
	I0805 16:37:37.126453    5521 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0805 16:37:37.126458    5521 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0805 16:37:37.126462    5521 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0805 16:37:37.126465    5521 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0805 16:37:37.126469    5521 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0805 16:37:37.126473    5521 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0805 16:37:37.126477    5521 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0805 16:37:37.126481    5521 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:37:37.126485    5521 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0805 16:37:37.127412    5521 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240730-75a5af0c
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0805 16:37:37.127420    5521 docker.go:615] Images already preloaded, skipping extraction
	I0805 16:37:37.127486    5521 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:37:37.146140    5521 command_runner.go:130] > kindest/kindnetd:v20240730-75a5af0c
	I0805 16:37:37.146154    5521 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0805 16:37:37.146159    5521 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0805 16:37:37.146163    5521 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0805 16:37:37.146167    5521 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0805 16:37:37.146170    5521 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0805 16:37:37.146174    5521 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0805 16:37:37.146179    5521 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0805 16:37:37.146182    5521 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:37:37.146186    5521 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0805 16:37:37.146679    5521 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240730-75a5af0c
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0805 16:37:37.146698    5521 cache_images.go:84] Images are preloaded, skipping loading
	I0805 16:37:37.146707    5521 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0805 16:37:37.146784    5521 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-985000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 16:37:37.146863    5521 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 16:37:37.182908    5521 command_runner.go:130] > cgroupfs
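	The driver probe is just a Go-template query against the running daemon, and its answer has to agree with the kubelet config rendered below (cgroupDriver: cgroupfs); a mismatch leaves kubelet and the runtime fighting over cgroup ownership:
	
	docker info --format '{{.CgroupDriver}}'   # prints "cgroupfs" on this VM
	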
	I0805 16:37:37.183498    5521 cni.go:84] Creating CNI manager for ""
	I0805 16:37:37.183509    5521 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0805 16:37:37.183518    5521 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 16:37:37.183536    5521 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-985000 NodeName:multinode-985000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 16:37:37.183619    5521 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-985000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
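	The three kubeadm documents plus the KubeProxyConfiguration above are staged as /var/tmp/minikube/kubeadm.yaml.new. Recent kubeadm releases (including v1.30) can lint such a file offline before any init phase touches the node; a sketch using the binaries path from this run:
	
	#!/bin/bash
	# Schema-check the staged config without modifying the cluster.
	sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new
	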
	I0805 16:37:37.183677    5521 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 16:37:37.192063    5521 command_runner.go:130] > kubeadm
	I0805 16:37:37.192073    5521 command_runner.go:130] > kubectl
	I0805 16:37:37.192078    5521 command_runner.go:130] > kubelet
	I0805 16:37:37.192202    5521 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:37:37.192247    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 16:37:37.200175    5521 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0805 16:37:37.213737    5521 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:37:37.227101    5521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0805 16:37:37.240845    5521 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0805 16:37:37.243830    5521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:37:37.253870    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:37:37.350271    5521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:37:37.365726    5521 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000 for IP: 192.169.0.13
	I0805 16:37:37.365744    5521 certs.go:194] generating shared ca certs ...
	I0805 16:37:37.365760    5521 certs.go:226] acquiring lock for ca certs: {Name:mkb83e058d89c7d4e66f4136f377a3c305b13735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:37:37.366000    5521 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key
	I0805 16:37:37.366088    5521 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key
	I0805 16:37:37.366102    5521 certs.go:256] generating profile certs ...
	I0805 16:37:37.366219    5521 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key
	I0805 16:37:37.366302    5521 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec
	I0805 16:37:37.366434    5521 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key
	I0805 16:37:37.366447    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 16:37:37.366477    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 16:37:37.366498    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 16:37:37.366518    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 16:37:37.366537    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 16:37:37.366569    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 16:37:37.366600    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 16:37:37.366630    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 16:37:37.366732    5521 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem (1338 bytes)
	W0805 16:37:37.366808    5521 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0805 16:37:37.366821    5521 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:37:37.366859    5521 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem (1082 bytes)
	I0805 16:37:37.366891    5521 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:37:37.366923    5521 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem (1675 bytes)
	I0805 16:37:37.366996    5521 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:37:37.367034    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:37:37.367064    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem -> /usr/share/ca-certificates/1678.pem
	I0805 16:37:37.367086    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /usr/share/ca-certificates/16782.pem
	I0805 16:37:37.367546    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:37:37.395681    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 16:37:37.414513    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:37:37.433690    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:37:37.452500    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 16:37:37.472109    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 16:37:37.491753    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:37:37.511029    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 16:37:37.530071    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:37:37.549206    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0805 16:37:37.568348    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0805 16:37:37.587345    5521 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 16:37:37.600856    5521 ssh_runner.go:195] Run: openssl version
	I0805 16:37:37.605037    5521 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0805 16:37:37.605082    5521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:37:37.614106    5521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:37:37.617312    5521 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:37:37.617414    5521 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:37:37.617448    5521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:37:37.621389    5521 command_runner.go:130] > b5213941
	I0805 16:37:37.621569    5521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 16:37:37.630682    5521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0805 16:37:37.639868    5521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0805 16:37:37.643124    5521 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:37:37.643203    5521 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:37:37.643234    5521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0805 16:37:37.647330    5521 command_runner.go:130] > 51391683
	I0805 16:37:37.647529    5521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0805 16:37:37.656868    5521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0805 16:37:37.665981    5521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0805 16:37:37.669370    5521 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:37:37.669486    5521 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:37:37.669522    5521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0805 16:37:37.673595    5521 command_runner.go:130] > 3ec20f2e
	I0805 16:37:37.673823    5521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
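	The b5213941.0 / 51391683.0 / 3ec20f2e.0 names above are OpenSSL subject-hash links: the default verify path looks certificates up as <hash>.0 under /etc/ssl/certs, so each installed PEM gets a symlink named after its own hash. The generic recipe:
	
	#!/bin/bash
	pem=/usr/share/ca-certificates/16782.pem
	h=$(openssl x509 -hash -noout -in "$pem")   # e.g. 3ec20f2e
	sudo ln -fs "$pem" "/etc/ssl/certs/$h.0"    # .0 = first cert with this hash
	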
	I0805 16:37:37.683082    5521 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:37:37.686344    5521 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:37:37.686356    5521 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0805 16:37:37.686361    5521 command_runner.go:130] > Device: 253,1	Inode: 3149128     Links: 1
	I0805 16:37:37.686366    5521 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 16:37:37.686371    5521 command_runner.go:130] > Access: 2024-08-05 23:20:58.401066212 +0000
	I0805 16:37:37.686375    5521 command_runner.go:130] > Modify: 2024-08-05 23:20:58.401066212 +0000
	I0805 16:37:37.686399    5521 command_runner.go:130] > Change: 2024-08-05 23:20:58.401066212 +0000
	I0805 16:37:37.686409    5521 command_runner.go:130] >  Birth: 2024-08-05 23:20:58.401066212 +0000
	I0805 16:37:37.686482    5521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 16:37:37.690751    5521 command_runner.go:130] > Certificate will not expire
	I0805 16:37:37.690873    5521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 16:37:37.695013    5521 command_runner.go:130] > Certificate will not expire
	I0805 16:37:37.695212    5521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 16:37:37.700369    5521 command_runner.go:130] > Certificate will not expire
	I0805 16:37:37.700476    5521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 16:37:37.704551    5521 command_runner.go:130] > Certificate will not expire
	I0805 16:37:37.704708    5521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 16:37:37.708755    5521 command_runner.go:130] > Certificate will not expire
	I0805 16:37:37.708896    5521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0805 16:37:37.713109    5521 command_runner.go:130] > Certificate will not expire
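	Each certificate above is screened with openssl's -checkend probe, which exits non-zero if the cert would expire within the given window (86400s = 24h), letting the caller decide whether to regenerate:
	
	#!/bin/bash
	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
	    crt=/var/lib/minikube/certs/$c.crt
	    if openssl x509 -noout -checkend 86400 -in "$crt"; then
	        echo "$c: valid for at least 24h"
	    else
	        echo "$c: expires within 24h, regenerate" >&2
	    fi
	done
	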
	I0805 16:37:37.713257    5521 kubeadm.go:392] StartCluster: {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:37:37.713368    5521 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:37:37.727282    5521 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 16:37:37.735614    5521 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0805 16:37:37.735623    5521 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0805 16:37:37.735628    5521 command_runner.go:130] > /var/lib/minikube/etcd:
	I0805 16:37:37.735631    5521 command_runner.go:130] > member
	I0805 16:37:37.735761    5521 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 16:37:37.735771    5521 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 16:37:37.735817    5521 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 16:37:37.743915    5521 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:37:37.744222    5521 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-985000" does not appear in /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:37:37.744310    5521 kubeconfig.go:62] /Users/jenkins/minikube-integration/19373-1122/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-985000" cluster setting kubeconfig missing "multinode-985000" context setting]
	I0805 16:37:37.744520    5521 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/kubeconfig: {Name:mk2a0d8b4d330b3c26432fc65d015ddf98a9cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:37:37.745178    5521 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:37:37.745371    5521 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa6d2060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:37:37.745697    5521 cert_rotation.go:137] Starting client certificate rotation controller
	I0805 16:37:37.745867    5521 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 16:37:37.753787    5521 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.13
	I0805 16:37:37.753807    5521 kubeadm.go:1160] stopping kube-system containers ...
	I0805 16:37:37.753864    5521 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:37:37.767689    5521 command_runner.go:130] > c9365aec3389
	I0805 16:37:37.767700    5521 command_runner.go:130] > 3d9fd612d0b1
	I0805 16:37:37.767703    5521 command_runner.go:130] > 2a8cd74365e9
	I0805 16:37:37.767706    5521 command_runner.go:130] > 35b9ac42edc0
	I0805 16:37:37.767710    5521 command_runner.go:130] > 724e5cfab0a2
	I0805 16:37:37.767713    5521 command_runner.go:130] > d58ca48f9f8b
	I0805 16:37:37.767717    5521 command_runner.go:130] > 65a1122097f0
	I0805 16:37:37.767720    5521 command_runner.go:130] > c91338eb0e13
	I0805 16:37:37.767729    5521 command_runner.go:130] > 792feba1a6f6
	I0805 16:37:37.767733    5521 command_runner.go:130] > 1fdd85b796ab
	I0805 16:37:37.767739    5521 command_runner.go:130] > d11865076c64
	I0805 16:37:37.767743    5521 command_runner.go:130] > 608878b33f35
	I0805 16:37:37.767746    5521 command_runner.go:130] > c86e04eb7823
	I0805 16:37:37.767749    5521 command_runner.go:130] > 55a20063845e
	I0805 16:37:37.767753    5521 command_runner.go:130] > b58900db5299
	I0805 16:37:37.767756    5521 command_runner.go:130] > 569788c2699f
	I0805 16:37:37.768462    5521 docker.go:483] Stopping containers: [c9365aec3389 3d9fd612d0b1 2a8cd74365e9 35b9ac42edc0 724e5cfab0a2 d58ca48f9f8b 65a1122097f0 c91338eb0e13 792feba1a6f6 1fdd85b796ab d11865076c64 608878b33f35 c86e04eb7823 55a20063845e b58900db5299 569788c2699f]
	I0805 16:37:37.768536    5521 ssh_runner.go:195] Run: docker stop c9365aec3389 3d9fd612d0b1 2a8cd74365e9 35b9ac42edc0 724e5cfab0a2 d58ca48f9f8b 65a1122097f0 c91338eb0e13 792feba1a6f6 1fdd85b796ab d11865076c64 608878b33f35 c86e04eb7823 55a20063845e b58900db5299 569788c2699f
	I0805 16:37:37.780204    5521 command_runner.go:130] > c9365aec3389
	I0805 16:37:37.781733    5521 command_runner.go:130] > 3d9fd612d0b1
	I0805 16:37:37.781870    5521 command_runner.go:130] > 2a8cd74365e9
	I0805 16:37:37.781981    5521 command_runner.go:130] > 35b9ac42edc0
	I0805 16:37:37.782219    5521 command_runner.go:130] > 724e5cfab0a2
	I0805 16:37:37.782404    5521 command_runner.go:130] > d58ca48f9f8b
	I0805 16:37:37.782493    5521 command_runner.go:130] > 65a1122097f0
	I0805 16:37:37.783962    5521 command_runner.go:130] > c91338eb0e13
	I0805 16:37:37.783968    5521 command_runner.go:130] > 792feba1a6f6
	I0805 16:37:37.783972    5521 command_runner.go:130] > 1fdd85b796ab
	I0805 16:37:37.783977    5521 command_runner.go:130] > d11865076c64
	I0805 16:37:37.784750    5521 command_runner.go:130] > 608878b33f35
	I0805 16:37:37.784758    5521 command_runner.go:130] > c86e04eb7823
	I0805 16:37:37.784761    5521 command_runner.go:130] > 55a20063845e
	I0805 16:37:37.784893    5521 command_runner.go:130] > b58900db5299
	I0805 16:37:37.784898    5521 command_runner.go:130] > 569788c2699f
	I0805 16:37:37.785811    5521 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 16:37:37.798972    5521 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 16:37:37.807138    5521 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0805 16:37:37.807150    5521 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0805 16:37:37.807156    5521 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0805 16:37:37.807162    5521 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:37:37.807183    5521 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:37:37.807189    5521 kubeadm.go:157] found existing configuration files:
	
	I0805 16:37:37.807236    5521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 16:37:37.815004    5521 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:37:37.815022    5521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:37:37.815068    5521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 16:37:37.823210    5521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 16:37:37.831025    5521 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:37:37.831041    5521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:37:37.831080    5521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 16:37:37.839362    5521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 16:37:37.847024    5521 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:37:37.847043    5521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:37:37.847077    5521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 16:37:37.855156    5521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 16:37:37.862975    5521 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:37:37.862994    5521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:37:37.863026    5521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
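
	The four grep-and-remove passes above implement minikube's stale-kubeconfig cleanup: any file under /etc/kubernetes that does not mention the expected control-plane endpoint (here they are all simply absent) is deleted so the kubeconfig phase below can regenerate it. A minimal Go sketch of the same logic, assuming local exec in place of minikube's SSH runner; the endpoint value is the one from the log:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// cleanStaleKubeconfigs mirrors the grep-then-rm pattern in the log above:
	// a kubeconfig that does not mention the expected API endpoint is removed
	// so that "kubeadm init phase kubeconfig" can rewrite it from scratch.
	func cleanStaleKubeconfigs(endpoint string) error {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			// grep exits non-zero when the pattern is absent or the file is missing,
			// so missing files take the same removal path as stale ones.
			if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
				fmt.Fprintf(os.Stderr, "%s lacks %s, removing\n", f, endpoint)
				if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
					return err
				}
			}
		}
		return nil
	}

	func main() {
		if err := cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
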
	I0805 16:37:37.871334    5521 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 16:37:37.879543    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:37:37.943566    5521 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:37:37.943663    5521 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0805 16:37:37.943824    5521 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0805 16:37:37.943956    5521 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 16:37:37.944158    5521 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0805 16:37:37.944374    5521 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0805 16:37:37.944697    5521 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0805 16:37:37.944812    5521 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0805 16:37:37.945011    5521 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0805 16:37:37.945077    5521 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 16:37:37.945285    5521 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 16:37:37.946228    5521 command_runner.go:130] > [certs] Using the existing "sa" key
	I0805 16:37:37.946304    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:37:39.167358    5521 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:37:39.167371    5521 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:37:39.167376    5521 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 16:37:39.167380    5521 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:37:39.167385    5521 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:37:39.167390    5521 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:37:39.167425    5521 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.221104057s)
	I0805 16:37:39.167438    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:37:39.219662    5521 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:37:39.220354    5521 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:37:39.220389    5521 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0805 16:37:39.339247    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:37:39.389550    5521 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:37:39.389565    5521 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:37:39.391233    5521 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:37:39.391757    5521 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:37:39.393094    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:37:39.451609    5521 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
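
	Rather than a full `kubeadm init`, the restart path replays individual phases in order: certs, kubeconfig, kubelet-start, control-plane, etcd. Each phase is idempotent, which is why the certs phase reports "Using existing ... on disk" throughout. A sketch of driving that sequence with os/exec, assuming the binary and config paths from the log and local (rather than SSH) execution:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Phase order as it appears in the log. Existing artifacts (certs,
		// the "sa" key, manifests) are reused rather than regenerated.
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("/var/lib/minikube/binaries/v1.30.3/kubeadm", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
				os.Exit(1)
			}
		}
	}
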
	I0805 16:37:39.461516    5521 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:37:39.461580    5521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:37:39.963685    5521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:37:40.462977    5521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:37:40.475006    5521 command_runner.go:130] > 1713
	I0805 16:37:40.475163    5521 api_server.go:72] duration metric: took 1.013654502s to wait for apiserver process to appear ...
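
	Before probing /healthz, the waiter polls pgrep until a kube-apiserver process exists; the bare "1713" above is the PID it found. A sketch of that poll loop, with the pattern and pgrep flags taken from the log (the helper name is ours):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForProcess polls pgrep (-x exact, -n newest, -f match full command
	// line) until a matching process appears or the deadline passes.
	func waitForProcess(pattern string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("pgrep", "-xnf", pattern).Output()
			if err == nil {
				return strings.TrimSpace(string(out)), nil // newest matching PID
			}
			time.Sleep(500 * time.Millisecond)
		}
		return "", fmt.Errorf("no process matching %q after %s", pattern, timeout)
	}

	func main() {
		pid, err := waitForProcess("kube-apiserver.*minikube.*", time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kube-apiserver pid:", pid)
	}
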
	I0805 16:37:40.475173    5521 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:37:40.475189    5521 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:37:42.515953    5521 api_server.go:279] https://192.169.0.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 16:37:42.515968    5521 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 16:37:42.515976    5521 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:37:42.561960    5521 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 16:37:42.561978    5521 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 16:37:42.975764    5521 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:37:42.980706    5521 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 16:37:42.980725    5521 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 16:37:43.476837    5521 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:37:43.480708    5521 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 16:37:43.480721    5521 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 16:37:43.976652    5521 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:37:43.982020    5521 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
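
	The progression above is typical of an apiserver coming up: first a 403, because the anonymous probe is rejected until the rbac/bootstrap-roles poststarthook installs the roles that permit unauthenticated /healthz access; then 500s while individual poststarthooks flip from [-] to [+]; finally a plain 200 "ok". A sketch of a polling loop that treats anything but 200 as "not ready yet"; TLS verification is skipped here because the probe is anonymous, which is an assumption about the client setup rather than minikube's actual transport:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls /healthz until it returns 200 or the deadline
	// passes. 403 and 500 both count as "not ready yet", matching the
	// progression seen in the log.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Anonymous probe; only reachability and status matter here.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.169.0.13:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
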
	I0805 16:37:43.982084    5521 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0805 16:37:43.982089    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:43.982096    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:43.982100    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:43.991478    5521 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0805 16:37:43.991491    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:43.991496    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:43.991499    5521 round_trippers.go:580]     Content-Length: 263
	I0805 16:37:43.991501    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:44 GMT
	I0805 16:37:43.991503    5521 round_trippers.go:580]     Audit-Id: c8ad866d-278d-4a88-b577-2337c27f176f
	I0805 16:37:43.991506    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:43.991508    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:43.991511    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:43.991536    5521 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0805 16:37:43.991580    5521 api_server.go:141] control plane version: v1.30.3
	I0805 16:37:43.991595    5521 api_server.go:131] duration metric: took 3.5164126s to wait for apiserver health ...
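
	The /version payload above is small enough to decode with a struct that names only the fields of interest; a sketch (not minikube's own types):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// versionInfo mirrors a subset of the fields actually present in the
	// /version response logged above.
	type versionInfo struct {
		Major      string `json:"major"`
		Minor      string `json:"minor"`
		GitVersion string `json:"gitVersion"`
		Platform   string `json:"platform"`
	}

	func main() {
		payload := []byte(`{"major":"1","minor":"30","gitVersion":"v1.30.3","platform":"linux/amd64"}`)
		var v versionInfo
		if err := json.Unmarshal(payload, &v); err != nil {
			panic(err)
		}
		fmt.Println("control plane version:", v.GitVersion) // v1.30.3
	}
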
	I0805 16:37:43.991603    5521 cni.go:84] Creating CNI manager for ""
	I0805 16:37:43.991607    5521 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0805 16:37:44.014799    5521 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0805 16:37:44.035887    5521 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0805 16:37:44.053905    5521 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0805 16:37:44.053923    5521 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0805 16:37:44.053930    5521 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0805 16:37:44.053942    5521 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 16:37:44.053946    5521 command_runner.go:130] > Access: 2024-08-05 23:37:30.300677873 +0000
	I0805 16:37:44.053950    5521 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0805 16:37:44.053955    5521 command_runner.go:130] > Change: 2024-08-05 23:37:28.153646920 +0000
	I0805 16:37:44.053958    5521 command_runner.go:130] >  Birth: -
	I0805 16:37:44.054010    5521 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0805 16:37:44.054018    5521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0805 16:37:44.078089    5521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0805 16:37:44.397453    5521 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0805 16:37:44.418847    5521 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0805 16:37:44.539954    5521 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0805 16:37:44.626597    5521 command_runner.go:130] > daemonset.apps/kindnet configured
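
	With three nodes detected, minikube selects kindnet and applies its manifest with the cluster's own kubectl; "unchanged" and "configured" are normal `kubectl apply` output when the resources already exist, so re-applying after a restart is safe. The apply step, sketched with os/exec under the same path assumptions as before:

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Apply the CNI manifest with the kubectl binary the cluster itself
		// uses, pointed at the in-VM kubeconfig. Paths are taken from the log;
		// running this outside the VM is only illustrative.
		cmd := exec.Command("/var/lib/minikube/binaries/v1.30.3/kubectl",
			"apply",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
			"-f", "/var/tmp/minikube/cni.yaml")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}
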
	I0805 16:37:44.629867    5521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 16:37:44.629936    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:37:44.629941    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.629947    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.629953    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.636693    5521 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 16:37:44.636713    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.636721    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:44 GMT
	I0805 16:37:44.636727    5521 round_trippers.go:580]     Audit-Id: 06b7f684-2b8a-4634-9922-7ad84cb7e6e5
	I0805 16:37:44.636731    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.636737    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.636741    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.636746    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.638935    5521 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1387"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 73649 chars]
	I0805 16:37:44.641759    5521 system_pods.go:59] 10 kube-system pods found
	I0805 16:37:44.641784    5521 system_pods.go:61] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 16:37:44.641790    5521 system_pods.go:61] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 16:37:44.641795    5521 system_pods.go:61] "kindnet-5kfjr" [d68d8211-58f0-4a8f-904a-c6f9f530d58d] Running
	I0805 16:37:44.641799    5521 system_pods.go:61] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0805 16:37:44.641804    5521 system_pods.go:61] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 16:37:44.641808    5521 system_pods.go:61] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 16:37:44.641814    5521 system_pods.go:61] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0805 16:37:44.641818    5521 system_pods.go:61] "kube-proxy-s65dd" [25cd7fe5-8af2-4869-be11-1eb8c5a7ec01] Running
	I0805 16:37:44.641842    5521 system_pods.go:61] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 16:37:44.641847    5521 system_pods.go:61] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 16:37:44.641852    5521 system_pods.go:74] duration metric: took 11.975799ms to wait for pod list to return data ...
	I0805 16:37:44.641861    5521 node_conditions.go:102] verifying NodePressure condition ...
	I0805 16:37:44.641901    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0805 16:37:44.641906    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.641911    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.641915    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.647494    5521 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 16:37:44.647507    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.647513    5521 round_trippers.go:580]     Audit-Id: 51276e8a-8d41-468a-8372-932c99dbe3e8
	I0805 16:37:44.647516    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.647518    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.647539    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.647544    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.647547    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:44 GMT
	I0805 16:37:44.647674    5521 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1388"},"items":[{"metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10158 chars]
	I0805 16:37:44.648158    5521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:37:44.648172    5521 node_conditions.go:123] node cpu capacity is 2
	I0805 16:37:44.648182    5521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:37:44.648186    5521 node_conditions.go:123] node cpu capacity is 2
	I0805 16:37:44.648190    5521 node_conditions.go:105] duration metric: took 6.325811ms to run NodePressure ...
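
	The NodePressure check reads each node's capacity straight from the NodeList, which is where the paired "storage ephemeral capacity is 17734596Ki" and "cpu capacity is 2" lines (one pair per node) come from. An illustrative client-go equivalent; the kubeconfig path is the in-VM one from the log, and this is not minikube's actual code:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		// Capacity is a ResourceList; the helpers return typed quantities.
		for _, n := range nodes.Items {
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
				n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		}
	}
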
	I0805 16:37:44.648205    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:37:44.761435    5521 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0805 16:37:44.914201    5521 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0805 16:37:44.915254    5521 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 16:37:44.915318    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0805 16:37:44.915324    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.915331    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.915334    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.917615    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:44.917630    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.917640    5521 round_trippers.go:580]     Audit-Id: 84aaee6c-4475-49f2-8185-30cc2c755e1c
	I0805 16:37:44.917647    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.917651    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.917654    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.917657    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.917660    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.918012    5521 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1392"},"items":[{"metadata":{"name":"etcd-multinode-985000","namespace":"kube-system","uid":"8d7ca2d9-8c7b-41b9-a199-de6449107471","resourceVersion":"1380","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.mirror":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030299Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 30917 chars]
	I0805 16:37:44.918731    5521 kubeadm.go:739] kubelet initialised
	I0805 16:37:44.918740    5521 kubeadm.go:740] duration metric: took 3.47538ms waiting for restarted kubelet to initialise ...
	I0805 16:37:44.918747    5521 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
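
	The waiter now checks, per label selector, that every system-critical pod reports Ready. (In the earlier GET, `tier%!D(MISSING)control-plane` is an artifact of the URL-encoded `=` (`%3D`) passing through a printf-style logger; the selector is really `tier=control-plane`.) A client-go sketch of the per-pod Ready predicate, under the same illustrative assumptions as above:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True, the same
	// predicate the pod_ready waiter applies to each system-critical pod.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// One of the selectors from the log; k8s-app=kube-dns, component=etcd,
		// etc. are checked the same way.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "tier=control-plane"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s ready=%v\n", p.Name, isPodReady(&p))
		}
	}
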
	I0805 16:37:44.918798    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:37:44.918804    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.918810    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.918815    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.920859    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:44.920866    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.920871    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.920873    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.920876    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.920878    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.920880    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.920883    5521 round_trippers.go:580]     Audit-Id: 51e54f33-9547-4470-b9ba-c080f1387d56
	I0805 16:37:44.921402    5521 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1392"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 73056 chars]
	I0805 16:37:44.922957    5521 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:44.922999    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:44.923004    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.923008    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.923011    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.924336    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:44.924346    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.924352    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.924355    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.924361    5521 round_trippers.go:580]     Audit-Id: e46b48bf-5949-4a1a-88ca-0532f6b9c8c3
	I0805 16:37:44.924364    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.924366    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.924368    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.924440    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0805 16:37:44.924683    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:44.924690    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.924696    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.924702    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.925980    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:44.925990    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.925998    5521 round_trippers.go:580]     Audit-Id: 28537896-265f-4611-9cfa-95ab32a9f5dc
	I0805 16:37:44.926004    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.926014    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.926018    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.926020    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.926023    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.926150    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:44.926329    5521 pod_ready.go:97] node "multinode-985000" hosting pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:44.926339    5521 pod_ready.go:81] duration metric: took 3.373593ms for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
	E0805 16:37:44.926345    5521 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-985000" hosting pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
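
	Each "not Ready (skipping!)" pair above is the guard that a pod's Ready condition is only meaningful while its hosting node reports Ready=True; with multinode-985000 still NotReady after the restart, every control-plane pod wait short-circuits the same way. A sketch of the node-side predicate, under the same client-go assumptions:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady mirrors the check behind the `has status "Ready":"False"`
	// messages: a pod's readiness is only trusted once its node is Ready.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-985000", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s ready=%v\n", n.Name, nodeReady(n))
	}
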
	I0805 16:37:44.926352    5521 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:44.926380    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-985000
	I0805 16:37:44.926385    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.926390    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.926394    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.927346    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:37:44.927354    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.927359    5521 round_trippers.go:580]     Audit-Id: 156a7215-933a-4e99-a1ed-5cbaef6005e2
	I0805 16:37:44.927362    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.927366    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.927371    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.927376    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.927381    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.927503    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-985000","namespace":"kube-system","uid":"8d7ca2d9-8c7b-41b9-a199-de6449107471","resourceVersion":"1380","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.mirror":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030299Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0805 16:37:44.927709    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:44.927716    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.927722    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.927726    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.928738    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:44.928746    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.928753    5521 round_trippers.go:580]     Audit-Id: d454e0d3-91a1-437f-9641-9eb40301fb8f
	I0805 16:37:44.928758    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.928762    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.928767    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.928790    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.928796    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.928901    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:44.929068    5521 pod_ready.go:97] node "multinode-985000" hosting pod "etcd-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:44.929083    5521 pod_ready.go:81] duration metric: took 2.726167ms for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	E0805 16:37:44.929089    5521 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-985000" hosting pod "etcd-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:44.929115    5521 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:44.929157    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-985000
	I0805 16:37:44.929163    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.929168    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.929172    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.930121    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:37:44.930130    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.930134    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.930137    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.930139    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.930142    5521 round_trippers.go:580]     Audit-Id: 04a0388e-012b-4775-93ee-012b587c4ce5
	I0805 16:37:44.930153    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.930157    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.930304    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-985000","namespace":"kube-system","uid":"9be3378a-5fab-4907-baad-507918e714e4","resourceVersion":"1377","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.mirror":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8136 chars]
	I0805 16:37:44.930549    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:44.930558    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.930562    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.930567    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.931628    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:44.931636    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.931641    5521 round_trippers.go:580]     Audit-Id: 72e9cf52-6af7-45fd-a39e-e10ac17a459d
	I0805 16:37:44.931646    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.931652    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.931657    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.931660    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.931663    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.931772    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:44.931949    5521 pod_ready.go:97] node "multinode-985000" hosting pod "kube-apiserver-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:44.931958    5521 pod_ready.go:81] duration metric: took 2.833903ms for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	E0805 16:37:44.931964    5521 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-985000" hosting pod "kube-apiserver-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:44.931970    5521 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:44.931996    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-985000
	I0805 16:37:44.932000    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.932006    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.932009    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.933363    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:44.933370    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.933375    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.933379    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.933383    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.933389    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.933392    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.933395    5521 round_trippers.go:580]     Audit-Id: 993e7085-2a06-4126-8cc5-0d75a41d047f
	I0805 16:37:44.933659    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-985000","namespace":"kube-system","uid":"4ad64361-65de-4b0b-b2a3-07df18c2e603","resourceVersion":"1378","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.mirror":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.seen":"2024-08-05T23:21:06.366027130Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7727 chars]
	I0805 16:37:45.030087    5521 request.go:629] Waited for 96.18446ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
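
	"Waited for ... due to client-side throttling" is emitted by client-go's own token-bucket rate limiter (historically QPS=5, Burst=10 by default), not by the server's priority-and-fairness machinery, as the message itself notes. Raising the limits on rest.Config suppresses the delays; a sketch:

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		// With the defaults, bursts of GETs like the node re-checks above get
		// queued client-side and logged by request.go. Higher limits trade
		// politeness to the apiserver for lower latency.
		cfg.QPS = 50
		cfg.Burst = 100
		if _, err := kubernetes.NewForConfig(cfg); err != nil {
			panic(err)
		}
		fmt.Println("client configured with QPS=50 Burst=100")
	}
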
	I0805 16:37:45.030215    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:45.030223    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:45.030234    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:45.030255    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:45.032395    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:45.032407    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:45.032414    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:45.032418    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:45.032423    5521 round_trippers.go:580]     Audit-Id: fd76f05c-aa0d-49d6-bc15-f6320e076edc
	I0805 16:37:45.032426    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:45.032428    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:45.032432    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:45.032710    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:45.032917    5521 pod_ready.go:97] node "multinode-985000" hosting pod "kube-controller-manager-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:45.032927    5521 pod_ready.go:81] duration metric: took 100.952173ms for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	E0805 16:37:45.032933    5521 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-985000" hosting pod "kube-controller-manager-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:45.032940    5521 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:45.231074    5521 request.go:629] Waited for 198.067218ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwgw7
	I0805 16:37:45.231166    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwgw7
	I0805 16:37:45.231251    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:45.231259    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:45.231265    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:45.233956    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:45.233970    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:45.233977    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:45.234001    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:45.234024    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:45.234036    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:45.234040    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:45.234045    5521 round_trippers.go:580]     Audit-Id: a628a40a-acc3-4a40-8f85-01be7202c746
	I0805 16:37:45.234163    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fwgw7","generateName":"kube-proxy-","namespace":"kube-system","uid":"3fb72e39-699d-4123-ae5e-e314a191d904","resourceVersion":"1388","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b6258e6-7b31-4600-b32b-4a269867c123","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b6258e6-7b31-4600-b32b-4a269867c123\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0805 16:37:45.430145    5521 request.go:629] Waited for 195.640146ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:45.430221    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:45.430232    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:45.430243    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:45.430253    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:45.432534    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:45.432543    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:45.432549    5521 round_trippers.go:580]     Audit-Id: b3c72e32-7485-434a-9741-e61d4dbf854b
	I0805 16:37:45.432551    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:45.432554    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:45.432557    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:45.432560    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:45.432563    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:45.432975    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:45.433185    5521 pod_ready.go:97] node "multinode-985000" hosting pod "kube-proxy-fwgw7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:45.433197    5521 pod_ready.go:81] duration metric: took 400.252263ms for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	E0805 16:37:45.433203    5521 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-985000" hosting pod "kube-proxy-fwgw7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:45.433211    5521 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s65dd" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:45.632072    5521 request.go:629] Waited for 198.802376ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s65dd
	I0805 16:37:45.632244    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s65dd
	I0805 16:37:45.632255    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:45.632266    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:45.632272    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:45.635053    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:45.635075    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:45.635085    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:45.635094    5521 round_trippers.go:580]     Audit-Id: 57426407-9d2e-4f47-a704-559027932b6b
	I0805 16:37:45.635098    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:45.635145    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:45.635163    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:45.635171    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:45.635354    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s65dd","generateName":"kube-proxy-","namespace":"kube-system","uid":"25cd7fe5-8af2-4869-be11-1eb8c5a7ec01","resourceVersion":"1280","creationTimestamp":"2024-08-05T23:34:49Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b6258e6-7b31-4600-b32b-4a269867c123","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:34:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b6258e6-7b31-4600-b32b-4a269867c123\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0805 16:37:45.831233    5521 request.go:629] Waited for 195.519063ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000-m03
	I0805 16:37:45.831411    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000-m03
	I0805 16:37:45.831422    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:45.831433    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:45.831439    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:45.834136    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:45.834155    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:45.834163    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:45.834183    5521 round_trippers.go:580]     Audit-Id: 27e71a24-1a24-4f27-b263-1184e4e136ef
	I0805 16:37:45.834194    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:45.834220    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:45.834227    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:45.834231    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:45.834346    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000-m03","uid":"9699bc94-d62c-4219-9310-93c890f4d182","resourceVersion":"1310","creationTimestamp":"2024-08-05T23:35:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_05T16_35_55_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:35:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0805 16:37:45.834594    5521 pod_ready.go:92] pod "kube-proxy-s65dd" in "kube-system" namespace has status "Ready":"True"
	I0805 16:37:45.834607    5521 pod_ready.go:81] duration metric: took 401.389356ms for pod "kube-proxy-s65dd" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:45.834615    5521 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:46.030012    5521 request.go:629] Waited for 195.347838ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:37:46.030118    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:37:46.030282    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:46.030295    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:46.030302    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:46.033255    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:46.033269    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:46.033277    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:46.033282    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:46 GMT
	I0805 16:37:46.033295    5521 round_trippers.go:580]     Audit-Id: 5581d0b0-634a-4879-93db-f12183f9c6d1
	I0805 16:37:46.033299    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:46.033303    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:46.033307    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:46.033383    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-985000","namespace":"kube-system","uid":"5e23b1b7-e45d-4b43-831c-aa835c5e536d","resourceVersion":"1379","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.mirror":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.seen":"2024-08-05T23:21:06.366029633Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5439 chars]
	I0805 16:37:46.231588    5521 request.go:629] Waited for 197.896286ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:46.231711    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:46.231722    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:46.231734    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:46.231741    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:46.234296    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:46.234309    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:46.234327    5521 round_trippers.go:580]     Audit-Id: ed41d168-df4f-4577-a59b-11a4695f1e4d
	I0805 16:37:46.234334    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:46.234344    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:46.234348    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:46.234352    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:46.234357    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:46 GMT
	I0805 16:37:46.234726    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:46.235010    5521 pod_ready.go:97] node "multinode-985000" hosting pod "kube-scheduler-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:46.235036    5521 pod_ready.go:81] duration metric: took 400.401386ms for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	E0805 16:37:46.235046    5521 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-985000" hosting pod "kube-scheduler-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:46.235053    5521 pod_ready.go:38] duration metric: took 1.316290856s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
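The pod_ready lines above apply one rule per system pod: the pod's PodReady condition must be True, and, per the "(skipping!)" messages, a pod is not counted Ready while its hosting node reports Ready=False. A minimal sketch of those two condition checks, assuming standard k8s.io/api types (the helper names are hypothetical; this is not minikube's pod_ready.go):

// Illustrative condition checks mirroring the pod_ready decisions above.
package readiness

import corev1 "k8s.io/api/core/v1"

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// nodeReady reports whether the node's NodeReady condition is True; the
// log above skips pods whose hosting node is not Ready, whatever the
// pod's own conditions say.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}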
	I0805 16:37:46.235072    5521 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 16:37:46.244782    5521 command_runner.go:130] > -16
	I0805 16:37:46.244799    5521 ops.go:34] apiserver oom_adj: -16
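The two lines above run a shell one-liner over SSH to read the kube-apiserver's oom_adj; a negative value such as -16 tells the kernel's OOM killer to spare the process under memory pressure. An illustrative Go sketch of the same probe run locally (not minikube's ssh_runner; the command string is the one shown in the log):

// Illustrative only: read the apiserver's oom_adj the way the log does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("/bin/bash", "-c",
		"cat /proc/$(pgrep kube-apiserver)/oom_adj").Output()
	if err != nil {
		panic(err)
	}
	// The run above printed -16, i.e. strongly protected from the OOM killer.
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(out)))
}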
	I0805 16:37:46.244803    5521 kubeadm.go:597] duration metric: took 8.509016692s to restartPrimaryControlPlane
	I0805 16:37:46.244808    5521 kubeadm.go:394] duration metric: took 8.531546295s to StartCluster
	I0805 16:37:46.244817    5521 settings.go:142] acquiring lock: {Name:mk564a817a54ecf2aef16a4d2309e85208c0231f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:37:46.244907    5521 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:37:46.245297    5521 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/kubeconfig: {Name:mk2a0d8b4d330b3c26432fc65d015ddf98a9cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
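The WriteFile line above takes a named lock (with a 500ms retry delay and a 1m0s timeout) before updating the shared kubeconfig, so concurrent minikube invocations cannot interleave writes. As an illustration of the general idea only (minikube's actual lock implementation differs), a Unix-only Go sketch using an advisory flock:

// Illustrative only: exclusive-lock a file before rewriting it.
package main

import (
	"os"
	"syscall"
)

func writeLocked(path string, data []byte) error {
	f, err := os.OpenFile(path, os.O_RDWR|os.O_CREATE, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	// Block until we hold the exclusive advisory lock.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
	if err := f.Truncate(0); err != nil {
		return err
	}
	_, err = f.WriteAt(data, 0)
	return err
}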
	I0805 16:37:46.245581    5521 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:37:46.245620    5521 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 16:37:46.245737    5521 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:37:46.265883    5521 out.go:177] * Verifying Kubernetes components...
	I0805 16:37:46.287681    5521 out.go:177] * Enabled addons: 
	I0805 16:37:46.308720    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:37:46.329784    5521 addons.go:510] duration metric: took 84.170663ms for enable addons: enabled=[]
	I0805 16:37:46.445431    5521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:37:46.455908    5521 node_ready.go:35] waiting up to 6m0s for node "multinode-985000" to be "Ready" ...
	I0805 16:37:46.455963    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:46.455968    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:46.455974    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:46.455977    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:46.457387    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:46.457397    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:46.457405    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:46.457409    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:46 GMT
	I0805 16:37:46.457413    5521 round_trippers.go:580]     Audit-Id: bd4eda68-4863-49e7-bbfb-7ea21cb5ada5
	I0805 16:37:46.457415    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:46.457419    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:46.457421    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:46.457522    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:46.956358    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:46.956384    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:46.956396    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:46.956402    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:46.958818    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:46.958832    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:46.958842    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:46.958847    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:47 GMT
	I0805 16:37:46.958852    5521 round_trippers.go:580]     Audit-Id: b4463266-7add-4cc7-bedc-006651384d80
	I0805 16:37:46.958856    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:46.958860    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:46.958865    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:46.959158    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:47.456173    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:47.456189    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:47.456196    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:47.456199    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:47.457836    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:47.457847    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:47.457853    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:47 GMT
	I0805 16:37:47.457855    5521 round_trippers.go:580]     Audit-Id: b5690d8d-ba4d-4e8f-b3e4-326d910d1169
	I0805 16:37:47.457859    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:47.457863    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:47.457865    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:47.457868    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:47.458059    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:47.957596    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:47.957622    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:47.957635    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:47.957747    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:47.960401    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:47.960416    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:47.960423    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:48 GMT
	I0805 16:37:47.960427    5521 round_trippers.go:580]     Audit-Id: 02db3cf8-0261-4eb0-999f-e3bddfad9106
	I0805 16:37:47.960432    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:47.960436    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:47.960442    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:47.960446    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:47.960593    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:48.456064    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:48.456080    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:48.456087    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:48.456090    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:48.457742    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:48.457753    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:48.457758    5521 round_trippers.go:580]     Audit-Id: 70dbc308-f0bd-455d-8c1c-5afbe89a93d9
	I0805 16:37:48.457762    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:48.457764    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:48.457768    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:48.457772    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:48.457775    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:48 GMT
	I0805 16:37:48.457993    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:48.458188    5521 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
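From here the log settles into the node_ready loop: one GET of /api/v1/nodes/multinode-985000 roughly every 500ms, each returning the same resourceVersion 1375 with Ready=False, until either the node flips to Ready or the 6m0s budget set at 16:37:46 expires. A minimal Go sketch of that polling pattern, assuming standard client-go (the kubeconfig path is a placeholder; the node name, interval, and timeout are taken from the log; this is not minikube's node_ready.go):

// Illustrative only: poll a node's Ready condition on a fixed interval.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeIsReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute) // the 6m0s budget in the log
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(
			context.TODO(), "multinode-985000", metav1.GetOptions{})
		if err == nil && nodeIsReady(node) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // ~500ms between GETs, as above
	}
	fmt.Println("timed out waiting for node to be Ready")
}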
	I0805 16:37:48.956783    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:48.956808    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:48.956843    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:48.956864    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:48.959167    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:48.959183    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:48.959193    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:48.959202    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:48.959208    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:48.959213    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:49 GMT
	I0805 16:37:48.959218    5521 round_trippers.go:580]     Audit-Id: 8fc7039f-2874-4170-a425-4689f2a4108b
	I0805 16:37:48.959223    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:48.959444    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:49.456474    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:49.456499    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:49.456511    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:49.456519    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:49.460713    5521 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 16:37:49.460739    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:49.460750    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:49.460761    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:49.460768    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:49 GMT
	I0805 16:37:49.460771    5521 round_trippers.go:580]     Audit-Id: ca04ca0c-3f72-4aff-8e7b-301f719bcbfc
	I0805 16:37:49.460775    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:49.460779    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:49.460857    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:49.957699    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:49.957728    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:49.957740    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:49.957835    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:49.960680    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:49.960698    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:49.960708    5521 round_trippers.go:580]     Audit-Id: 2de612c8-6d27-4ce3-b54a-c8ff3a4a639d
	I0805 16:37:49.960714    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:49.960722    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:49.960727    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:49.960734    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:49.960740    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:50 GMT
	I0805 16:37:49.960897    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:50.457100    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:50.457129    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:50.457142    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:50.457153    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:50.459627    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:50.459642    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:50.459649    5521 round_trippers.go:580]     Audit-Id: fafeb1d7-a055-47c0-988a-6b38c5651dfc
	I0805 16:37:50.459655    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:50.459660    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:50.459663    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:50.459666    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:50.459676    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:50 GMT
	I0805 16:37:50.459741    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:50.459999    5521 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:37:50.956078    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:50.956154    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:50.956163    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:50.956169    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:50.958070    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:50.958082    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:50.958087    5521 round_trippers.go:580]     Audit-Id: 87aa82fe-18d5-4cce-85d4-59e61ce26f17
	I0805 16:37:50.958091    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:50.958094    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:50.958097    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:50.958100    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:50.958102    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:51 GMT
	I0805 16:37:50.958160    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:51.457531    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:51.457557    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:51.457653    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:51.457663    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:51.460369    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:51.460384    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:51.460391    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:51.460396    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:51.460400    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:51.460404    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:51 GMT
	I0805 16:37:51.460431    5521 round_trippers.go:580]     Audit-Id: 9466c051-32fc-4ea5-bd73-ed0e7f687b57
	I0805 16:37:51.460450    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:51.460881    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:51.958224    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:51.958246    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:51.958258    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:51.958263    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:51.960788    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:51.960803    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:51.960811    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:51.960816    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:52 GMT
	I0805 16:37:51.960821    5521 round_trippers.go:580]     Audit-Id: af328a60-8cdc-4dd9-8f48-0c8f8247a6e1
	I0805 16:37:51.960827    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:51.960833    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:51.960836    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:51.960936    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:52.457362    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:52.457389    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:52.457401    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:52.457409    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:52.460067    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:52.460081    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:52.460088    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:52.460093    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:52.460097    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:52.460101    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:52.460104    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:52 GMT
	I0805 16:37:52.460107    5521 round_trippers.go:580]     Audit-Id: 7e825e88-a0c3-4ec8-9784-79cc2ced397e
	I0805 16:37:52.460238    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:52.460481    5521 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:37:52.956862    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:52.956888    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:52.956900    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:52.956906    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:52.959190    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:52.959207    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:52.959222    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:52.959230    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:52.959236    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:52.959241    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:53 GMT
	I0805 16:37:52.959245    5521 round_trippers.go:580]     Audit-Id: 1a9c796b-7598-4e9f-984e-7d71ef0ecc6b
	I0805 16:37:52.959248    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:52.959484    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:53.456240    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:53.456260    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:53.456268    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:53.456272    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:53.458257    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:53.458266    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:53.458272    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:53.458274    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:53 GMT
	I0805 16:37:53.458279    5521 round_trippers.go:580]     Audit-Id: 624a2604-a974-4849-aae7-2e1a5658d567
	I0805 16:37:53.458282    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:53.458287    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:53.458289    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:53.458511    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:53.957417    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:53.957442    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:53.957454    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:53.957460    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:53.960056    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:53.960069    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:53.960076    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:53.960080    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:53.960084    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:53.960088    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:53.960092    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:54 GMT
	I0805 16:37:53.960096    5521 round_trippers.go:580]     Audit-Id: 4faec3b3-a538-4ac5-b5df-a77a30b26579
	I0805 16:37:53.960283    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:54.456804    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:54.456830    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:54.456842    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:54.456850    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:54.459440    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:54.459455    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:54.459462    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:54.459467    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:54 GMT
	I0805 16:37:54.459471    5521 round_trippers.go:580]     Audit-Id: c4315559-7c37-420d-be82-f17839e46d45
	I0805 16:37:54.459475    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:54.459478    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:54.459483    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:54.459541    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:54.957878    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:54.957940    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:54.957948    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:54.957954    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:54.959305    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:54.959315    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:54.959320    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:54.959323    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:55 GMT
	I0805 16:37:54.959326    5521 round_trippers.go:580]     Audit-Id: b65ad43a-738a-45c5-8d88-879d1015f894
	I0805 16:37:54.959328    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:54.959331    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:54.959334    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:54.959389    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1479","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5422 chars]
	I0805 16:37:54.959586    5521 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:37:55.456090    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:55.456116    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:55.456128    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:55.456169    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:55.458752    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:55.458766    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:55.458773    5521 round_trippers.go:580]     Audit-Id: 616d546e-47b3-4c39-a1cf-a7bc7ca58bf7
	I0805 16:37:55.458777    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:55.458782    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:55.458785    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:55.458790    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:55.458793    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:55 GMT
	I0805 16:37:55.459013    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1493","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0805 16:37:55.956768    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:55.956795    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:55.956807    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:55.956815    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:55.959573    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:55.959589    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:55.959598    5521 round_trippers.go:580]     Audit-Id: a21b3b8d-1df5-4728-80b8-f92ed173fb09
	I0805 16:37:55.959602    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:55.959606    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:55.959611    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:55.959615    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:55.959619    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:56 GMT
	I0805 16:37:55.959715    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1493","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0805 16:37:56.456636    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:56.456739    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:56.456753    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:56.456759    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:56.458839    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:56.458851    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:56.458859    5521 round_trippers.go:580]     Audit-Id: b8671d44-80ca-458b-b1a7-50f5ad978f8f
	I0805 16:37:56.458864    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:56.458870    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:56.458874    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:56.458878    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:56.458881    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:56 GMT
	I0805 16:37:56.458982    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1493","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0805 16:37:56.956321    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:56.956347    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:56.956363    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:56.956372    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:56.958919    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:56.958932    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:56.958939    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:56.958944    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:56.958948    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:57 GMT
	I0805 16:37:56.958952    5521 round_trippers.go:580]     Audit-Id: 4f4bb43a-a081-437b-8ed2-cbdb66346756
	I0805 16:37:56.958958    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:56.958961    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:56.959161    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1493","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0805 16:37:57.456800    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:57.456815    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:57.456821    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:57.456825    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:57.458252    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:57.458262    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:57.458266    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:57.458270    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:57 GMT
	I0805 16:37:57.458273    5521 round_trippers.go:580]     Audit-Id: f407e253-302d-4f95-b5a4-ba92b556127a
	I0805 16:37:57.458276    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:57.458278    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:57.458281    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:57.458508    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:57.458703    5521 node_ready.go:49] node "multinode-985000" has status "Ready":"True"
	I0805 16:37:57.458716    5521 node_ready.go:38] duration metric: took 11.002775889s for node "multinode-985000" to be "Ready" ...
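
The ~500ms cadence of the GET requests above is the node readiness poll: each iteration fetches /api/v1/nodes/multinode-985000 and inspects the node's Ready status condition until it reports True (here after about 11s). Below is a minimal client-go sketch of such a loop, for illustration only; the kubeconfig path is a placeholder, and the interval and timeout are assumptions read off this log rather than minikube's internal helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the test run uses its own MINIKUBE_HOME.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 500ms until the Ready condition is True, or give up after 6m.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, "multinode-985000", metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet"
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}
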
	I0805 16:37:57.458723    5521 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:37:57.458755    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:37:57.458761    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:57.458766    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:57.458770    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:57.462079    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:37:57.462091    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:57.462096    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:57.462099    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:57.462102    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:57.462105    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:57.462107    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:57 GMT
	I0805 16:37:57.462111    5521 round_trippers.go:580]     Audit-Id: c20c94e3-f664-43bb-99a2-b2fb3d7f9976
	I0805 16:37:57.463098    5521 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1502"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 72982 chars]
	I0805 16:37:57.464719    5521 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:57.464766    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:57.464771    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:57.464777    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:57.464781    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:57.468609    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:37:57.468622    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:57.468660    5521 round_trippers.go:580]     Audit-Id: 9de6faa5-7a31-44a9-83bf-9ebccfd4a34c
	I0805 16:37:57.468668    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:57.468673    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:57.468677    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:57.468680    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:57.468683    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:57 GMT
	I0805 16:37:57.468940    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0805 16:37:57.469229    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:57.469236    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:57.469242    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:57.469246    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:57.472498    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:37:57.472509    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:57.472515    5521 round_trippers.go:580]     Audit-Id: 4ff61667-289e-4440-93e2-be7d6d55b721
	I0805 16:37:57.472519    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:57.472522    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:57.472525    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:57.472529    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:57.472531    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:57 GMT
	I0805 16:37:57.472719    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:57.966220    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:57.966278    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:57.966296    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:57.966304    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:57.969173    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:57.969187    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:57.969194    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:57.969198    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:57.969202    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:57.969206    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:57.969210    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:58 GMT
	I0805 16:37:57.969214    5521 round_trippers.go:580]     Audit-Id: 9d8c78fc-82fd-4791-b979-ae013d775a53
	I0805 16:37:57.969286    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0805 16:37:57.969645    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:57.969655    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:57.969662    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:57.969665    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:57.971024    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:57.971035    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:57.971043    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:57.971057    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:57.971067    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:57.971072    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:58 GMT
	I0805 16:37:57.971078    5521 round_trippers.go:580]     Audit-Id: 1384bca3-9b68-4402-b310-399209a4314b
	I0805 16:37:57.971085    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:57.971227    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:58.465939    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:58.465967    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:58.465978    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:58.465984    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:58.468758    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:58.468774    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:58.468781    5521 round_trippers.go:580]     Audit-Id: 72df3ada-da8b-4478-8394-8e4440f54d0d
	I0805 16:37:58.468786    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:58.468790    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:58.468794    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:58.468797    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:58.468800    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:58 GMT
	I0805 16:37:58.469261    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0805 16:37:58.469660    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:58.469669    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:58.469678    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:58.469683    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:58.471092    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:58.471100    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:58.471106    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:58.471110    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:58.471113    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:58 GMT
	I0805 16:37:58.471116    5521 round_trippers.go:580]     Audit-Id: 422803bf-9df2-457f-baab-402da408f3ef
	I0805 16:37:58.471118    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:58.471121    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:58.471275    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:58.966614    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:58.966630    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:58.966638    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:58.966643    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:58.968744    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:58.968756    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:58.968764    5521 round_trippers.go:580]     Audit-Id: 3e47d6ce-e3a9-4db9-9176-cf25942d89b9
	I0805 16:37:58.968769    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:58.968773    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:58.968777    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:58.968779    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:58.968782    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:59 GMT
	I0805 16:37:58.969124    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0805 16:37:58.969515    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:58.969537    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:58.969561    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:58.969565    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:58.970905    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:58.970913    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:58.970918    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:58.970927    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:58.970932    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:58.970935    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:58.970938    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:59 GMT
	I0805 16:37:58.970940    5521 round_trippers.go:580]     Audit-Id: f5155c70-9046-4427-944c-248d4543ab46
	I0805 16:37:58.971032    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.465508    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:59.465521    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.465527    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.465530    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.468891    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:37:59.468903    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.468908    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:59 GMT
	I0805 16:37:59.468912    5521 round_trippers.go:580]     Audit-Id: 04ed6578-9810-4fac-bbc6-2e95106ea7a2
	I0805 16:37:59.468914    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.468917    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.468920    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.468922    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.469308    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0805 16:37:59.469589    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:59.469595    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.469601    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.469604    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.471279    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:59.471287    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.471293    5521 round_trippers.go:580]     Audit-Id: 9ef82004-a4d2-4da7-8c13-f62c040183d9
	I0805 16:37:59.471296    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.471299    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.471301    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.471303    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.471306    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:59 GMT
	I0805 16:37:59.471417    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.471592    5521 pod_ready.go:102] pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace has status "Ready":"False"
	I0805 16:37:59.965187    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:59.965206    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.965218    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.965223    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.967501    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:59.967516    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.967523    5521 round_trippers.go:580]     Audit-Id: 6aa85007-6ee0-4657-8e54-a4bb9dfb34ac
	I0805 16:37:59.967528    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.967548    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.967555    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.967559    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.967563    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.967804    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1520","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6784 chars]
	I0805 16:37:59.968187    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:59.968194    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.968200    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.968203    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.969359    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:59.969366    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.969373    5521 round_trippers.go:580]     Audit-Id: 47ab49d3-f2d9-42b4-9106-89187d49ce44
	I0805 16:37:59.969376    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.969378    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.969382    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.969385    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.969389    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.969574    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.969740    5521 pod_ready.go:92] pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace has status "Ready":"True"
	I0805 16:37:59.969749    5521 pod_ready.go:81] duration metric: took 2.505012595s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
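
A pod counts as "Ready" in these pod_ready checks when the PodReady entry in its status conditions is True; coredns flips once its readiness probe passes, which is why the poll above reports "False" before "True". A hedged sketch of that evaluation against the coredns pod polled here (placeholder kubeconfig path; a generic client-go check, not minikube's actual helper):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady status condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
		"coredns-7db6d8ff4d-fqtll", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, podIsReady(pod))
}
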
	I0805 16:37:59.969756    5521 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.969784    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-985000
	I0805 16:37:59.969788    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.969793    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.969797    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.970714    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:37:59.970723    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.970728    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.970731    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.970733    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.970736    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.970738    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.970740    5521 round_trippers.go:580]     Audit-Id: e43ae6e7-5ed0-48b6-a0a7-dfb77e057ed0
	I0805 16:37:59.970919    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-985000","namespace":"kube-system","uid":"8d7ca2d9-8c7b-41b9-a199-de6449107471","resourceVersion":"1506","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.mirror":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030299Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6358 chars]
	I0805 16:37:59.971134    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:59.971141    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.971147    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.971150    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.972128    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:37:59.972141    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.972148    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.972154    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.972158    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.972160    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.972163    5521 round_trippers.go:580]     Audit-Id: 5b17c3dc-a0a2-4c0d-aa7a-8999b87e3e64
	I0805 16:37:59.972187    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.972281    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.972443    5521 pod_ready.go:92] pod "etcd-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:37:59.972450    5521 pod_ready.go:81] duration metric: took 2.690084ms for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.972459    5521 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.972487    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-985000
	I0805 16:37:59.972492    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.972497    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.972500    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.973486    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:37:59.973494    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.973499    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.973504    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.973508    5521 round_trippers.go:580]     Audit-Id: 5bcb7226-eda8-4823-8b5c-25d9a2496fe7
	I0805 16:37:59.973514    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.973518    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.973522    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.973687    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-985000","namespace":"kube-system","uid":"9be3378a-5fab-4907-baad-507918e714e4","resourceVersion":"1498","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.mirror":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7892 chars]
	I0805 16:37:59.973925    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:59.973931    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.973937    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.973941    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.974960    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:59.974978    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.974986    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.974990    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.974993    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.974996    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.975000    5521 round_trippers.go:580]     Audit-Id: 9e7c3601-1b94-462b-97ec-1a8afab1df7f
	I0805 16:37:59.975003    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.975129    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.975296    5521 pod_ready.go:92] pod "kube-apiserver-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:37:59.975303    5521 pod_ready.go:81] duration metric: took 2.839851ms for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
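
The pod_ready lines here implement one poll cycle per pod: GET the pod, check its Ready condition, GET the node it runs on, repeat until the 6m0s budget is spent. Below is a minimal client-go sketch of that pattern, not minikube's actual pod_ready code; the clientset construction is omitted and the 2-second poll interval is an assumption.

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the PodReady condition is True, which is what
// the `has status "Ready":"True"` lines in the log are checking.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls GET /api/v1/namespaces/{ns}/pods/{name} until the pod
// reports Ready or the timeout (6m0s in the log) elapses.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			return isPodReady(pod), nil
		})
}
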
	I0805 16:37:59.975309    5521 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.975339    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-985000
	I0805 16:37:59.975343    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.975349    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.975352    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.976422    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:59.976452    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.976458    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.976467    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.976470    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.976472    5521 round_trippers.go:580]     Audit-Id: 512682ae-f4a9-4641-903b-89cfe7630d58
	I0805 16:37:59.976476    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.976478    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.976584    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-985000","namespace":"kube-system","uid":"4ad64361-65de-4b0b-b2a3-07df18c2e603","resourceVersion":"1494","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.mirror":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.seen":"2024-08-05T23:21:06.366027130Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0805 16:37:59.976808    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:59.976815    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.976820    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.976824    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.977900    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:59.977908    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.977912    5521 round_trippers.go:580]     Audit-Id: 09ba5c21-e357-4918-93b4-ff1a00ece334
	I0805 16:37:59.977916    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.977919    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.977922    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.977925    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.977928    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.978095    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.978252    5521 pod_ready.go:92] pod "kube-controller-manager-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:37:59.978260    5521 pod_ready.go:81] duration metric: took 2.945375ms for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.978267    5521 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.978292    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwgw7
	I0805 16:37:59.978297    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.978313    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.978320    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.979354    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:59.979360    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.979364    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.979367    5521 round_trippers.go:580]     Audit-Id: d6e77621-e9d2-486b-8cc4-49ab45a5f053
	I0805 16:37:59.979373    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.979378    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.979382    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.979386    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.979584    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fwgw7","generateName":"kube-proxy-","namespace":"kube-system","uid":"3fb72e39-699d-4123-ae5e-e314a191d904","resourceVersion":"1509","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b6258e6-7b31-4600-b32b-4a269867c123","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b6258e6-7b31-4600-b32b-4a269867c123\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0805 16:37:59.979798    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:59.979805    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.979810    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.979815    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.980814    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:37:59.980822    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.980829    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.980835    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.980839    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.980842    5521 round_trippers.go:580]     Audit-Id: bf9dc5db-49ef-4e93-a9ad-d8ea6d952b22
	I0805 16:37:59.980845    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.980847    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.980963    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.981119    5521 pod_ready.go:92] pod "kube-proxy-fwgw7" in "kube-system" namespace has status "Ready":"True"
	I0805 16:37:59.981126    5521 pod_ready.go:81] duration metric: took 2.853579ms for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.981131    5521 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s65dd" in "kube-system" namespace to be "Ready" ...
	I0805 16:38:00.165697    5521 request.go:629] Waited for 184.4763ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s65dd
	I0805 16:38:00.165754    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s65dd
	I0805 16:38:00.165763    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:00.165776    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:00.165784    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:00.168520    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:38:00.168535    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:00.168543    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:00.168547    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:00.168552    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:00.168556    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:00.168559    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:38:00.168564    5521 round_trippers.go:580]     Audit-Id: cb996198-c69f-41f3-9883-c0b1d86c0ef8
	I0805 16:38:00.168681    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s65dd","generateName":"kube-proxy-","namespace":"kube-system","uid":"25cd7fe5-8af2-4869-be11-1eb8c5a7ec01","resourceVersion":"1280","creationTimestamp":"2024-08-05T23:34:49Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b6258e6-7b31-4600-b32b-4a269867c123","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:34:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b6258e6-7b31-4600-b32b-4a269867c123\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0805 16:38:00.366684    5521 request.go:629] Waited for 197.656042ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000-m03
	I0805 16:38:00.366816    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000-m03
	I0805 16:38:00.366827    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:00.366839    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:00.366845    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:00.369434    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:38:00.369449    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:00.369456    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:38:00.369461    5521 round_trippers.go:580]     Audit-Id: 8a485a3a-116c-4fd2-986e-0f95c466f2b6
	I0805 16:38:00.369464    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:00.369468    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:00.369472    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:00.369491    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:00.369671    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000-m03","uid":"9699bc94-d62c-4219-9310-93c890f4d182","resourceVersion":"1310","creationTimestamp":"2024-08-05T23:35:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_05T16_35_55_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:35:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0805 16:38:00.369888    5521 pod_ready.go:92] pod "kube-proxy-s65dd" in "kube-system" namespace has status "Ready":"True"
	I0805 16:38:00.369900    5521 pod_ready.go:81] duration metric: took 388.763276ms for pod "kube-proxy-s65dd" in "kube-system" namespace to be "Ready" ...
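
The `Waited for ... due to client-side throttling, not priority and fairness` lines above come from client-go's local token-bucket rate limiter, not from API Priority and Fairness on the server: the burst of readiness GETs drains the bucket, so requests queue briefly on the client. A sketch of where those knobs live on rest.Config; the values shown are client-go's defaults, not minikube-specific settings.

package clientcfg

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/util/flowcontrol"
)

// configureClientThrottling shows the two fields behind the "Waited ..." log
// lines. Raising them trades API-server load for fewer client-side stalls.
func configureClientThrottling(cfg *rest.Config) {
	cfg.QPS = 5    // steady-state requests per second (client-go default)
	cfg.Burst = 10 // short bursts allowed above QPS (client-go default)
	// Equivalent explicit form of the default token-bucket limiter:
	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(cfg.QPS, cfg.Burst)
}
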
	I0805 16:38:00.369909    5521 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:38:00.565911    5521 request.go:629] Waited for 195.966473ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:38:00.566005    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:38:00.566010    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:00.566016    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:00.566021    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:00.567727    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:38:00.567736    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:00.567741    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:00.567744    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:00.567746    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:00.567750    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:00.567753    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:38:00.567756    5521 round_trippers.go:580]     Audit-Id: e82326e5-6b6c-4bbe-9e4b-0ddab6f947e6
	I0805 16:38:00.567921    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-985000","namespace":"kube-system","uid":"5e23b1b7-e45d-4b43-831c-aa835c5e536d","resourceVersion":"1502","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.mirror":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.seen":"2024-08-05T23:21:06.366029633Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0805 16:38:00.765952    5521 request.go:629] Waited for 197.798951ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:38:00.766012    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:38:00.766024    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:00.766035    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:00.766043    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:00.768641    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:38:00.768656    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:00.768663    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:00.768668    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:00.768672    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:00.768679    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:00.768686    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:38:00.768690    5521 round_trippers.go:580]     Audit-Id: 185ed8df-c8cf-4ff7-8566-ce38bafe88b6
	I0805 16:38:00.768965    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1525","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0805 16:38:00.769214    5521 pod_ready.go:92] pod "kube-scheduler-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:38:00.769227    5521 pod_ready.go:81] duration metric: took 399.310045ms for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:38:00.769236    5521 pod_ready.go:38] duration metric: took 3.310501987s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:38:00.769251    5521 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:38:00.769314    5521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:38:00.780856    5521 command_runner.go:130] > 1713
	I0805 16:38:00.780992    5521 api_server.go:72] duration metric: took 14.535377095s to wait for apiserver process to appear ...
	I0805 16:38:00.781000    5521 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:38:00.781009    5521 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:38:00.784000    5521 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0805 16:38:00.784029    5521 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0805 16:38:00.784034    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:00.784041    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:00.784045    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:00.784553    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:38:00.784561    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:00.784567    5521 round_trippers.go:580]     Content-Length: 263
	I0805 16:38:00.784570    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:38:00.784572    5521 round_trippers.go:580]     Audit-Id: 5f0639a4-edd4-4f06-9ffe-bc3569a1e001
	I0805 16:38:00.784575    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:00.784578    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:00.784582    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:00.784584    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:00.784592    5521 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0805 16:38:00.784614    5521 api_server.go:141] control plane version: v1.30.3
	I0805 16:38:00.784621    5521 api_server.go:131] duration metric: took 3.617958ms to wait for apiserver health ...
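
The health wait above is two plain HTTPS probes: GET /healthz, which answers with the literal body `ok`, and GET /version, which returns the JSON version struct printed in full just before this. A standalone sketch of both probes against the endpoint from this run; note that the real client authenticates with the cluster's certificates, while this sketch skips TLS verification purely for brevity (an assumption, demo only).

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption: demo only
	}}

	resp, err := client.Get("https://192.169.0.13:8443/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok

	resp, err = client.Get("https://192.169.0.13:8443/version")
	if err != nil {
		panic(err)
	}
	var v versionInfo
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println("control plane version:", v.GitVersion) // v1.30.3 in this run
}
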
	I0805 16:38:00.784627    5521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 16:38:00.965403    5521 request.go:629] Waited for 180.737038ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:38:00.965497    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:38:00.965511    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:00.965523    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:00.965530    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:00.969409    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:38:00.969427    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:00.969435    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:00.969440    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:00.969467    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:00.969482    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:01 GMT
	I0805 16:38:00.969489    5521 round_trippers.go:580]     Audit-Id: 9df3ad2c-a16e-4582-8dab-0552f9f48e75
	I0805 16:38:00.969493    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:00.970371    5521 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1531"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1520","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 72029 chars]
	I0805 16:38:00.971896    5521 system_pods.go:59] 10 kube-system pods found
	I0805 16:38:00.971906    5521 system_pods.go:61] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:38:00.971910    5521 system_pods.go:61] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:38:00.971912    5521 system_pods.go:61] "kindnet-5kfjr" [d68d8211-58f0-4a8f-904a-c6f9f530d58d] Running
	I0805 16:38:00.971915    5521 system_pods.go:61] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:38:00.971917    5521 system_pods.go:61] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:38:00.971920    5521 system_pods.go:61] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:38:00.971923    5521 system_pods.go:61] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:38:00.971926    5521 system_pods.go:61] "kube-proxy-s65dd" [25cd7fe5-8af2-4869-be11-1eb8c5a7ec01] Running
	I0805 16:38:00.971929    5521 system_pods.go:61] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:38:00.971931    5521 system_pods.go:61] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:38:00.971935    5521 system_pods.go:74] duration metric: took 187.304764ms to wait for pod list to return data ...
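
The system_pods step above is a single list call: fetch every pod in kube-system and print one line per pod with its phase. A minimal sketch of that listing, with clientset construction from the kubeconfig omitted as an assumption.

package podcheck

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// listSystemPods reproduces the shape of the log lines above: a count,
// then `"name" [uid] phase` for each kube-system pod.
func listSystemPods(ctx context.Context, cs kubernetes.Interface) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase) // e.g. Running
	}
	return nil
}
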
	I0805 16:38:00.971941    5521 default_sa.go:34] waiting for default service account to be created ...
	I0805 16:38:01.166632    5521 request.go:629] Waited for 194.612281ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:38:01.166685    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:38:01.166696    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:01.166710    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:01.166717    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:01.169824    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:38:01.169846    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:01.169857    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:01.169864    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:01.169869    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:01.169872    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:01.169875    5521 round_trippers.go:580]     Content-Length: 262
	I0805 16:38:01.169881    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:01 GMT
	I0805 16:38:01.169885    5521 round_trippers.go:580]     Audit-Id: 596b84b0-d5e1-453f-9c6b-48a083c0f9d5
	I0805 16:38:01.169899    5521 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1531"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b0626468-f73b-4e9b-8270-658495d43f4a","resourceVersion":"337","creationTimestamp":"2024-08-05T23:21:19Z"}}]}
	I0805 16:38:01.170038    5521 default_sa.go:45] found service account: "default"
	I0805 16:38:01.170050    5521 default_sa.go:55] duration metric: took 198.104201ms for default service account to be created ...
	I0805 16:38:01.170061    5521 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 16:38:01.365509    5521 request.go:629] Waited for 195.385608ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:38:01.365661    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:38:01.365673    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:01.365684    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:01.365691    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:01.369380    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:38:01.369395    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:01.369401    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:01.369406    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:01 GMT
	I0805 16:38:01.369410    5521 round_trippers.go:580]     Audit-Id: 61bbab58-2729-4303-914c-2ce9a281d990
	I0805 16:38:01.369414    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:01.369419    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:01.369423    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:01.370558    5521 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1531"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1520","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 72029 chars]
	I0805 16:38:01.372078    5521 system_pods.go:86] 10 kube-system pods found
	I0805 16:38:01.372087    5521 system_pods.go:89] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:38:01.372091    5521 system_pods.go:89] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:38:01.372095    5521 system_pods.go:89] "kindnet-5kfjr" [d68d8211-58f0-4a8f-904a-c6f9f530d58d] Running
	I0805 16:38:01.372098    5521 system_pods.go:89] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:38:01.372101    5521 system_pods.go:89] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:38:01.372104    5521 system_pods.go:89] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:38:01.372108    5521 system_pods.go:89] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:38:01.372111    5521 system_pods.go:89] "kube-proxy-s65dd" [25cd7fe5-8af2-4869-be11-1eb8c5a7ec01] Running
	I0805 16:38:01.372114    5521 system_pods.go:89] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:38:01.372117    5521 system_pods.go:89] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:38:01.372121    5521 system_pods.go:126] duration metric: took 202.055662ms to wait for k8s-apps to be running ...
	I0805 16:38:01.372129    5521 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 16:38:01.372178    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:38:01.384196    5521 system_svc.go:56] duration metric: took 12.064518ms WaitForService to wait for kubelet
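
`systemctl is-active --quiet <unit>` prints nothing and reports state solely through its exit code (0 means active), so the runner above only needs the command to exit cleanly. A local sketch of that check; minikube executes it on the guest through its SSH runner (the exact argv it uses is the one recorded verbatim in the log), and the canonical unit name here is a simplification.

package svccheck

import "os/exec"

// kubeletActive reports whether the kubelet unit is active. Run returns nil
// exactly when the command exits 0, which is how is-active signals "active".
func kubeletActive() bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}
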
	I0805 16:38:01.384212    5521 kubeadm.go:582] duration metric: took 15.138595056s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:38:01.384224    5521 node_conditions.go:102] verifying NodePressure condition ...
	I0805 16:38:01.566320    5521 request.go:629] Waited for 182.003764ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0805 16:38:01.566366    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0805 16:38:01.566373    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:01.566385    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:01.566391    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:01.569209    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:38:01.569222    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:01.569229    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:01.569238    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:01 GMT
	I0805 16:38:01.569244    5521 round_trippers.go:580]     Audit-Id: c16ec0aa-cf96-486e-a79d-d457d64a2789
	I0805 16:38:01.569248    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:01.569250    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:01.569254    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:01.569365    5521 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1531"},"items":[{"metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1525","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10031 chars]
	I0805 16:38:01.569754    5521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:38:01.569766    5521 node_conditions.go:123] node cpu capacity is 2
	I0805 16:38:01.569774    5521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:38:01.569781    5521 node_conditions.go:123] node cpu capacity is 2
	I0805 16:38:01.569787    5521 node_conditions.go:105] duration metric: took 185.55857ms to run NodePressure ...
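
The NodePressure step lists all nodes (two in this cluster, hence the repeated capacity pairs above), reads the capacity figures echoed in the log (ephemeral storage 17734596Ki, 2 CPUs per node), and verifies no pressure condition is True. A client-go sketch of that verification, with clientset construction again omitted as an assumption.

package nodecheck

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// verifyNodePressure fails if any node reports memory, disk, or PID pressure.
func verifyNodePressure(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		fmt.Println("node storage ephemeral capacity is", n.Status.Capacity.StorageEphemeral().String())
		fmt.Println("node cpu capacity is", n.Status.Capacity.Cpu().Value())
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure ||
				c.Type == corev1.NodePIDPressure) && c.Status == corev1.ConditionTrue {
				return fmt.Errorf("node %s reports %s", n.Name, c.Type)
			}
		}
	}
	return nil
}
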
	I0805 16:38:01.569796    5521 start.go:241] waiting for startup goroutines ...
	I0805 16:38:01.569804    5521 start.go:246] waiting for cluster config update ...
	I0805 16:38:01.569812    5521 start.go:255] writing updated cluster config ...
	I0805 16:38:01.590862    5521 out.go:177] 
	I0805 16:38:01.612868    5521 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:38:01.612983    5521 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:38:01.635442    5521 out.go:177] * Starting "multinode-985000-m02" worker node in "multinode-985000" cluster
	I0805 16:38:01.677243    5521 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:38:01.677275    5521 cache.go:56] Caching tarball of preloaded images
	I0805 16:38:01.677441    5521 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:38:01.677459    5521 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:38:01.677582    5521 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:38:01.678499    5521 start.go:360] acquireMachinesLock for multinode-985000-m02: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:38:01.678607    5521 start.go:364] duration metric: took 81.884µs to acquireMachinesLock for "multinode-985000-m02"
	I0805 16:38:01.678635    5521 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:38:01.678643    5521 fix.go:54] fixHost starting: m02
	I0805 16:38:01.679008    5521 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:38:01.679028    5521 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:38:01.688188    5521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53145
	I0805 16:38:01.688589    5521 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:38:01.688918    5521 main.go:141] libmachine: Using API Version  1
	I0805 16:38:01.688930    5521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:38:01.689133    5521 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:38:01.689265    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:01.689361    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:38:01.689448    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:38:01.689523    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:38:01.690467    5521 fix.go:112] recreateIfNeeded on multinode-985000-m02: state=Stopped err=<nil>
	I0805 16:38:01.690478    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:01.690482    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid 4678 missing from process table
	W0805 16:38:01.690569    5521 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:38:01.711256    5521 out.go:177] * Restarting existing hyperkit VM for "multinode-985000-m02" ...
	I0805 16:38:01.732476    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .Start
	I0805 16:38:01.732792    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:38:01.732823    5521 main.go:141] libmachine: (multinode-985000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid
	I0805 16:38:01.734619    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid 4678 missing from process table
	I0805 16:38:01.734647    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | pid 4678 is in state "Stopped"
	I0805 16:38:01.734664    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid...
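
The stale-pid-file handling above reduces to: read the recorded pid, probe whether that process still exists, and remove the file if it does not. On Unix the probe is signal 0, which performs the existence and permission checks without delivering any signal. A self-contained sketch of that pattern; the relative `hyperkit.pid` path stands in for the full machine path in the log.

package main

import (
	"os"
	"strconv"
	"strings"
	"syscall"
)

// pidAlive reports whether a process with the given pid currently exists.
// On Unix, os.FindProcess always succeeds; Signal(0) does the real probe.
func pidAlive(pid int) bool {
	p, err := os.FindProcess(pid)
	if err != nil {
		return false
	}
	return p.Signal(syscall.Signal(0)) == nil
}

func main() {
	raw, err := os.ReadFile("hyperkit.pid")
	if err != nil {
		return // no pid file: nothing to clean up
	}
	pid, err := strconv.Atoi(strings.TrimSpace(string(raw)))
	if err != nil {
		return // unreadable pid: leave the file for inspection
	}
	if !pidAlive(pid) {
		os.Remove("hyperkit.pid") // stale: safe to remove before restarting
	}
}
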
	I0805 16:38:01.734965    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Using UUID ab5b9c9f-9e28-4bc2-8fcd-b98fce011173
	I0805 16:38:01.762464    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Generated MAC a6:1c:88:9c:44:3
	I0805 16:38:01.762484    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:38:01.762607    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6900)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:38:01.762638    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6900)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:38:01.762681    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:38:01.762732    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ab5b9c9f-9e28-4bc2-8fcd-b98fce011173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
	I0805 16:38:01.762746    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:38:01.764220    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 DEBUG: hyperkit: Pid is 5546
	I0805 16:38:01.764724    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 0
	I0805 16:38:01.764744    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:38:01.764814    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 5546
	I0805 16:38:01.766771    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:38:01.766808    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0805 16:38:01.766817    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b3b9}
	I0805 16:38:01.766827    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:38:01.766833    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b00c}
	I0805 16:38:01.766840    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Found match: a6:1c:88:9c:44:3
	I0805 16:38:01.766846    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | IP: 192.169.0.14
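
To find the restarted VM's address, the driver scans macOS's /var/db/dhcpd_leases for an entry whose hw_address matches the generated MAC and takes the ip_address from the same block, as the "Searching for ... / Found match / IP:" lines show. Below is a simplified parser for that lookup; the real lease file is slightly richer than the block layout sketched in the comment, so treat this as illustrative.

package leases

import (
	"bufio"
	"io"
	"strings"
)

// lookupIP scans lease blocks shaped roughly like:
//
//	{
//		name=minikube
//		ip_address=192.169.0.14
//		hw_address=1,a6:1c:88:9c:44:3
//		lease=0x66b2b00c
//	}
//
// and returns the ip_address of the block whose hw_address ends in mac.
func lookupIP(leases io.Reader, mac string) (string, bool) {
	var ip string
	sc := bufio.NewScanner(leases)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if v, ok := strings.CutPrefix(line, "ip_address="); ok {
			ip = v // remembered until we see the block's hw_address
		}
		// hw_address values carry a hardware-type prefix ("1,") before the MAC.
		if v, ok := strings.CutPrefix(line, "hw_address="); ok && strings.HasSuffix(v, ","+mac) {
			return ip, true
		}
	}
	return "", false
}
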
	I0805 16:38:01.766898    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:38:01.767595    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:38:01.767783    5521 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:38:01.768260    5521 machine.go:94] provisionDockerMachine start ...
	I0805 16:38:01.768271    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:01.768389    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:01.768494    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:01.768587    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:01.768704    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:01.768800    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:01.768955    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:01.769112    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:01.769120    5521 main.go:141] libmachine: About to run SSH command:
	hostname
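
provisionDockerMachine starts by running a bare `hostname` over SSH to confirm the box is reachable; the `SSH cmd err, output: <nil>: minikube` line further down is that command finally succeeding once the VM boots. A sketch of the same round trip with golang.org/x/crypto/ssh; the user name, key path, and host-key handling here are assumptions, not minikube's exact settings.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("id_rsa") // assumption: the machine's SSH key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker", // assumption: boot2docker's default user
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
	}
	client, err := ssh.Dial("tcp", "192.169.0.14:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.Output("hostname") // one session per command
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out) // "minikube" in the log above
}
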
	I0805 16:38:01.772314    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:38:01.780646    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:38:01.781683    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:38:01.781725    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:38:01.781742    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:38:01.781754    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:38:02.165919    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:38:02.165934    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:38:02.281252    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:38:02.281273    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:38:02.281284    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:38:02.281293    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:38:02.282119    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:38:02.282130    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:38:07.861454    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:07 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:38:07.861538    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:07 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:38:07.861548    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:07 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:38:07.885114    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:07 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:38:12.833107    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 16:38:12.833122    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:38:12.833275    5521 buildroot.go:166] provisioning hostname "multinode-985000-m02"
	I0805 16:38:12.833287    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:38:12.833379    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:12.833467    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:12.833553    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:12.833648    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:12.833745    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:12.833872    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:12.834012    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:12.834021    5521 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000-m02 && echo "multinode-985000-m02" | sudo tee /etc/hostname
	I0805 16:38:12.899963    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000-m02
	
	I0805 16:38:12.899978    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:12.900133    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:12.900233    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:12.900332    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:12.900419    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:12.900559    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:12.900721    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:12.900732    5521 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:38:12.963291    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
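The two SSH commands above are the hostname-provisioning step: set the kernel hostname, persist it to /etc/hostname, and make /etc/hosts resolve the new name, rewriting an existing 127.0.1.1 entry when there is one. A minimal sketch of the same idempotent pattern (NODE stands in for the machine name):

	NODE=multinode-985000-m02
	# set and persist the hostname
	sudo hostname "$NODE" && echo "$NODE" | sudo tee /etc/hostname
	# add a 127.0.1.1 mapping only if the name is not already present
	grep -q "[[:space:]]$NODE\$" /etc/hosts || echo "127.0.1.1 $NODE" | sudo tee -a /etc/hosts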
	I0805 16:38:12.963306    5521 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:38:12.963316    5521 buildroot.go:174] setting up certificates
	I0805 16:38:12.963325    5521 provision.go:84] configureAuth start
	I0805 16:38:12.963332    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:38:12.963463    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:38:12.963563    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:12.963644    5521 provision.go:143] copyHostCerts
	I0805 16:38:12.963672    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:38:12.963719    5521 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:38:12.963724    5521 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:38:12.963846    5521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:38:12.964058    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:38:12.964088    5521 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:38:12.964093    5521 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:38:12.964171    5521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:38:12.964327    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:38:12.964357    5521 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:38:12.964362    5521 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:38:12.964431    5521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:38:12.964609    5521 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-985000-m02]
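provision.go:117 generates a server certificate signed by the per-installation CA, with SANs covering 127.0.0.1, the node IP, localhost, minikube, and the machine name. To spot-check which SANs a generated server.pem actually carries (assuming the path from the log):

	# list the Subject Alternative Name entries of the server cert
	openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'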
	I0805 16:38:13.029718    5521 provision.go:177] copyRemoteCerts
	I0805 16:38:13.029767    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:38:13.029782    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:13.029926    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:13.030013    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.030100    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:13.030195    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:38:13.063868    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:38:13.063938    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:38:13.083721    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:38:13.083789    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:38:13.103391    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:38:13.103455    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0805 16:38:13.123247    5521 provision.go:87] duration metric: took 159.914588ms to configureAuth
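copyRemoteCerts places ca.pem, server.pem, and server-key.pem under /etc/docker so that dockerd can run with --tlsverify on tcp://0.0.0.0:2376. A client would then authenticate with the matching client certificate, along these lines (sketch; $MINIKUBE_HOME abbreviates the host-side .minikube path from the log):

	docker --tlsverify \
	  --tlscacert "$MINIKUBE_HOME/certs/ca.pem" \
	  --tlscert   "$MINIKUBE_HOME/certs/cert.pem" \
	  --tlskey    "$MINIKUBE_HOME/certs/key.pem" \
	  -H tcp://192.169.0.14:2376 version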
	I0805 16:38:13.123259    5521 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:38:13.123427    5521 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:38:13.123441    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:13.123574    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:13.123660    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:13.123737    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.123827    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.123918    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:13.124026    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:13.124190    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:13.124198    5521 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:38:13.182171    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:38:13.182183    5521 buildroot.go:70] root file system type: tmpfs
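The df probe above is how buildroot.go classifies the guest root filesystem; on this ISO the root is a tmpfs, which the provisioner records before rendering the docker unit in the next step. An equivalent one-liner on any systemd guest:

	# print only the filesystem type of / (tmpfs here)
	findmnt -n -o FSTYPE /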
	I0805 16:38:13.182268    5521 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:38:13.182279    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:13.182405    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:13.182503    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.182591    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.182683    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:13.182809    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:13.182954    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:13.183003    5521 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:38:13.248138    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:38:13.248155    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:13.248304    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:13.248405    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.248495    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.248573    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:13.248699    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:13.248870    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:13.248883    5521 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:38:14.774504    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:38:14.774518    5521 machine.go:97] duration metric: took 13.006233682s to provisionDockerMachine
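The command at the previous step is an idempotent unit swap: render the new file to docker.service.new, replace the live unit only when diff exits non-zero (a change, or, as here, a missing original), then daemon-reload, enable, and restart. The general shape of the pattern:

	UNIT=/lib/systemd/system/docker.service
	sudo diff -u "$UNIT" "$UNIT.new" || {
	  # only reached when the rendered unit differs (or did not exist yet)
	  sudo mv "$UNIT.new" "$UNIT"
	  sudo systemctl daemon-reload
	  sudo systemctl -f enable docker
	  sudo systemctl -f restart docker
	}

The "diff: can't stat" output above is therefore expected on a freshly provisioned node: no prior unit existed, so the new one is installed unconditionally.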
	I0805 16:38:14.774527    5521 start.go:293] postStartSetup for "multinode-985000-m02" (driver="hyperkit")
	I0805 16:38:14.774535    5521 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:38:14.774546    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:14.774714    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:38:14.774729    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:14.774827    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:14.774909    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:14.774998    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:14.775085    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:38:14.816544    5521 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:38:14.820061    5521 command_runner.go:130] > NAME=Buildroot
	I0805 16:38:14.820070    5521 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:38:14.820074    5521 command_runner.go:130] > ID=buildroot
	I0805 16:38:14.820078    5521 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:38:14.820083    5521 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:38:14.820286    5521 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:38:14.820300    5521 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:38:14.820397    5521 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:38:14.820538    5521 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:38:14.820545    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:38:14.820707    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:38:14.833566    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:38:14.861185    5521 start.go:296] duration metric: took 86.648603ms for postStartSetup
	I0805 16:38:14.861206    5521 fix.go:56] duration metric: took 13.182545662s for fixHost
	I0805 16:38:14.861238    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:14.861375    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:14.861467    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:14.861563    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:14.861652    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:14.861768    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:14.861912    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:14.861919    5521 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 16:38:14.917690    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722901094.828326920
	
	I0805 16:38:14.917701    5521 fix.go:216] guest clock: 1722901094.828326920
	I0805 16:38:14.917706    5521 fix.go:229] Guest: 2024-08-05 16:38:14.82832692 -0700 PDT Remote: 2024-08-05 16:38:14.861212 -0700 PDT m=+55.555905067 (delta=-32.88508ms)
	I0805 16:38:14.917716    5521 fix.go:200] guest clock delta is within tolerance: -32.88508ms
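fix.go reads the guest clock over SSH and compares it against the host; the delta of -32.88ms is inside tolerance, so no resync is needed. The %!s(MISSING) in the logged command appears to be an artifact of Go printf verb escaping in minikube's logger; judging from the seconds.nanoseconds output it parses, the remote command stands for the usual sub-second clock read:

	# guest clock with nanosecond precision, as parsed by fix.go
	date +%s.%N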
	I0805 16:38:14.917719    5521 start.go:83] releasing machines lock for "multinode-985000-m02", held for 13.239083998s
	I0805 16:38:14.917737    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:14.917864    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:38:14.938999    5521 out.go:177] * Found network options:
	I0805 16:38:14.996112    5521 out.go:177]   - NO_PROXY=192.169.0.13
	W0805 16:38:15.018259    5521 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:38:15.018300    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:15.019232    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:15.019568    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:15.019685    5521 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:38:15.019730    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	W0805 16:38:15.019879    5521 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:38:15.019923    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:15.019984    5521 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:38:15.020001    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:15.020157    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:15.020211    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:15.020380    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:15.020412    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:15.020614    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:15.020625    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:38:15.020777    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:38:15.053501    5521 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:38:15.053659    5521 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:38:15.053723    5521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:38:15.098852    5521 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:38:15.098927    5521 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:38:15.098945    5521 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:38:15.098953    5521 start.go:495] detecting cgroup driver to use...
	I0805 16:38:15.099023    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:38:15.113615    5521 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:38:15.113873    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:38:15.122000    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:38:15.130421    5521 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:38:15.130464    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:38:15.138622    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:38:15.146769    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:38:15.154881    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:38:15.162940    5521 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:38:15.171228    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:38:15.179545    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:38:15.187667    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:38:15.196019    5521 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:38:15.203310    5521 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:38:15.203418    5521 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:38:15.210899    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:38:15.315364    5521 ssh_runner.go:195] Run: sudo systemctl restart containerd
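The sed sequence above edits /etc/containerd/config.toml in place: pin the sandbox (pause) image, disable restrict_oom_score_adj, force SystemdCgroup = false (i.e. the cgroupfs driver), migrate io.containerd.runtime.v1.linux and runc.v1 runtime names to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d, before the daemon-reload and containerd restart. After the restart, the effective cgroup setting can be confirmed with:

	# show which cgroup driver containerd is configured for
	grep -n 'SystemdCgroup' /etc/containerd/config.toml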
	I0805 16:38:15.333178    5521 start.go:495] detecting cgroup driver to use...
	I0805 16:38:15.333246    5521 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:38:15.351847    5521 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:38:15.352028    5521 command_runner.go:130] > [Unit]
	I0805 16:38:15.352037    5521 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:38:15.352041    5521 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:38:15.352046    5521 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:38:15.352050    5521 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:38:15.352057    5521 command_runner.go:130] > StartLimitBurst=3
	I0805 16:38:15.352063    5521 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:38:15.352066    5521 command_runner.go:130] > [Service]
	I0805 16:38:15.352070    5521 command_runner.go:130] > Type=notify
	I0805 16:38:15.352078    5521 command_runner.go:130] > Restart=on-failure
	I0805 16:38:15.352084    5521 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0805 16:38:15.352092    5521 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:38:15.352102    5521 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:38:15.352115    5521 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:38:15.352122    5521 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:38:15.352128    5521 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:38:15.352133    5521 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:38:15.352139    5521 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:38:15.352148    5521 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:38:15.352155    5521 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:38:15.352158    5521 command_runner.go:130] > ExecStart=
	I0805 16:38:15.352169    5521 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:38:15.352174    5521 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:38:15.352181    5521 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:38:15.352187    5521 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:38:15.352190    5521 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:38:15.352193    5521 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:38:15.352197    5521 command_runner.go:130] > LimitCORE=infinity
	I0805 16:38:15.352202    5521 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:38:15.352209    5521 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:38:15.352215    5521 command_runner.go:130] > TasksMax=infinity
	I0805 16:38:15.352219    5521 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:38:15.352224    5521 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:38:15.352229    5521 command_runner.go:130] > Delegate=yes
	I0805 16:38:15.352237    5521 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:38:15.352249    5521 command_runner.go:130] > KillMode=process
	I0805 16:38:15.352253    5521 command_runner.go:130] > [Install]
	I0805 16:38:15.352256    5521 command_runner.go:130] > WantedBy=multi-user.target
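systemctl cat prints the unit systemd has actually loaded, which lets start.go confirm the rendered file landed: note the single effective ExecStart after the empty reset line, and the injected Environment=NO_PROXY=192.169.0.13. The same spot-check works by hand on any unit:

	# on-disk unit (plus drop-ins) as loaded by systemd
	sudo systemctl cat docker.service
	# where the unit file and any drop-in fragments live
	systemctl show -p FragmentPath -p DropInPaths docker.service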
	I0805 16:38:15.352438    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:38:15.367477    5521 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:38:15.384493    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:38:15.395662    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:38:15.405888    5521 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:38:15.468063    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:38:15.478558    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:38:15.493596    5521 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:38:15.493658    5521 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:38:15.496390    5521 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:38:15.496655    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:38:15.503652    5521 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:38:15.519898    5521 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:38:15.619700    5521 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:38:15.722257    5521 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:38:15.722278    5521 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
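docker.go then writes a small /etc/docker/daemon.json (130 bytes here) so that docker's cgroup driver matches the cgroupfs setting chosen for containerd above. The file itself is not echoed in the log; a minimal daemon.json selecting cgroupfs would look like this (a sketch, not the verbatim 130-byte file):

	sudo tee /etc/docker/daemon.json <<'EOF'
	{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }
	EOF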
	I0805 16:38:15.735967    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:38:15.833114    5521 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:39:16.651467    5521 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0805 16:39:16.651483    5521 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0805 16:39:16.651496    5521 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.818287184s)
	I0805 16:39:16.651563    5521 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0805 16:39:16.661216    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:39:16.661228    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:13.420146905Z" level=info msg="Starting up"
	I0805 16:39:16.661236    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:13.420872507Z" level=info msg="containerd not running, starting managed containerd"
	I0805 16:39:16.661248    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:13.421358599Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=497
	I0805 16:39:16.661258    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.437602421Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0805 16:39:16.661268    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454632195Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0805 16:39:16.661294    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454680682Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0805 16:39:16.661303    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454724229Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0805 16:39:16.661313    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454738567Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661323    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454771554Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:39:16.661333    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454832124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661358    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455014271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:39:16.661368    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455053874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661380    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455070229Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:39:16.661390    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455079145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661401    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455109467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661411    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455253015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661426    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.456861169Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:39:16.661438    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.456915956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661496    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457058253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:39:16.661510    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457101847Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0805 16:39:16.661521    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457151686Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0805 16:39:16.661529    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457193291Z" level=info msg="metadata content store policy set" policy=shared
	I0805 16:39:16.661537    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457536850Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0805 16:39:16.661546    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457637715Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0805 16:39:16.661555    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457694331Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0805 16:39:16.661564    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457728855Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0805 16:39:16.661573    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457761160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0805 16:39:16.661582    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457827388Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0805 16:39:16.661591    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458029068Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0805 16:39:16.661599    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458106036Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0805 16:39:16.661608    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458141669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0805 16:39:16.661618    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458173056Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0805 16:39:16.661628    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458207694Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661638    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458242036Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661647    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458286329Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661656    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458320625Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661666    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458360911Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661683    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458395522Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661748    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458435461Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661759    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458468994Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661770    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458507655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661780    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458543528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661789    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458575409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661797    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458606090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661806    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458640753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661816    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458672527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661825    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458702141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661833    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458786564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661843    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458833470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661851    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458867942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661860    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458897905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661869    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458927275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661878    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458956835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661891    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458999344Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0805 16:39:16.661900    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459042185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661909    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459076838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661918    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459117163Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0805 16:39:16.661928    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459171448Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0805 16:39:16.661939    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459206426Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0805 16:39:16.661948    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459236530Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0805 16:39:16.662025    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459266816Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0805 16:39:16.662039    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459297300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.662049    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459333043Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0805 16:39:16.662058    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459365111Z" level=info msg="NRI interface is disabled by configuration."
	I0805 16:39:16.662068    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459520257Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0805 16:39:16.662076    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459589097Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0805 16:39:16.662085    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459647415Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0805 16:39:16.662098    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459731249Z" level=info msg="containerd successfully booted in 0.022632s"
	I0805 16:39:16.662106    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.442507541Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0805 16:39:16.662113    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.446047233Z" level=info msg="Loading containers: start."
	I0805 16:39:16.662134    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.533905829Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0805 16:39:16.662147    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.600469950Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0805 16:39:16.662155    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.643991126Z" level=info msg="Loading containers: done."
	I0805 16:39:16.662165    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.660081921Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0805 16:39:16.662172    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.660224037Z" level=info msg="Daemon has completed initialization"
	I0805 16:39:16.662182    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.679152512Z" level=info msg="API listen on /var/run/docker.sock"
	I0805 16:39:16.662188    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	I0805 16:39:16.662195    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.679221051Z" level=info msg="API listen on [::]:2376"
	I0805 16:39:16.662203    5521 command_runner.go:130] > Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.785720729Z" level=info msg="Processing signal 'terminated'"
	I0805 16:39:16.662211    5521 command_runner.go:130] > Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786631200Z" level=info msg="Daemon shutdown complete"
	I0805 16:39:16.662222    5521 command_runner.go:130] > Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786734889Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0805 16:39:16.662233    5521 command_runner.go:130] > Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786818951Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	I0805 16:39:16.662243    5521 command_runner.go:130] > Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786854490Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0805 16:39:16.662276    5521 command_runner.go:130] > Aug 05 23:38:15 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0805 16:39:16.662283    5521 command_runner.go:130] > Aug 05 23:38:16 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0805 16:39:16.662289    5521 command_runner.go:130] > Aug 05 23:38:16 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0805 16:39:16.662295    5521 command_runner.go:130] > Aug 05 23:38:16 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:39:16.662302    5521 command_runner.go:130] > Aug 05 23:38:16 multinode-985000-m02 dockerd[909]: time="2024-08-05T23:38:16.819558392Z" level=info msg="Starting up"
	I0805 16:39:16.662312    5521 command_runner.go:130] > Aug 05 23:39:16 multinode-985000-m02 dockerd[909]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0805 16:39:16.662323    5521 command_runner.go:130] > Aug 05 23:39:16 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0805 16:39:16.662329    5521 command_runner.go:130] > Aug 05 23:39:16 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0805 16:39:16.662335    5521 command_runner.go:130] > Aug 05 23:39:16 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
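The journal pins down the failure mode: the first dockerd (pid 489) came up at 23:38:14 with its own managed containerd, was cleanly terminated at 23:38:15 by the provisioner's restart, and the second dockerd (pid 909) then spent the full minute failing to dial /run/containerd/containerd.sock until systemd marked the unit failed. Note that the standalone containerd service had been stopped moments earlier (23:38:15, above), which makes that dial target the first thing to check. Triaging this by hand on the node would start with:

	# is the standalone containerd unit running, and does its socket exist?
	sudo systemctl status containerd
	ls -l /run/containerd/containerd.sock
	# recent docker unit history around the failed restart
	sudo journalctl -u docker --no-pager -n 100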
	I0805 16:39:16.687918    5521 out.go:177] 
	W0805 16:39:16.708897    5521 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 05 23:38:13 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:38:13 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:13.420146905Z" level=info msg="Starting up"
	Aug 05 23:38:13 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:13.420872507Z" level=info msg="containerd not running, starting managed containerd"
	Aug 05 23:38:13 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:13.421358599Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=497
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.437602421Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454632195Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454680682Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454724229Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454738567Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454771554Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454832124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455014271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455053874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455070229Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455079145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455109467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455253015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.456861169Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.456915956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457058253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457101847Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457151686Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457193291Z" level=info msg="metadata content store policy set" policy=shared
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457536850Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457637715Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457694331Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457728855Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457761160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457827388Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458029068Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458106036Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458141669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458173056Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458207694Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458242036Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458286329Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458320625Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458360911Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458395522Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458435461Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458468994Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458507655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458543528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458575409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458606090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458640753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458672527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458702141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458786564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458833470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458867942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458897905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458927275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458956835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458999344Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459042185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459076838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459117163Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459171448Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459206426Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459236530Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459266816Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459297300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459333043Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459365111Z" level=info msg="NRI interface is disabled by configuration."
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459520257Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459589097Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459647415Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459731249Z" level=info msg="containerd successfully booted in 0.022632s"
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.442507541Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.446047233Z" level=info msg="Loading containers: start."
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.533905829Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.600469950Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.643991126Z" level=info msg="Loading containers: done."
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.660081921Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.660224037Z" level=info msg="Daemon has completed initialization"
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.679152512Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 05 23:38:14 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.679221051Z" level=info msg="API listen on [::]:2376"
	Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.785720729Z" level=info msg="Processing signal 'terminated'"
	Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786631200Z" level=info msg="Daemon shutdown complete"
	Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786734889Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786818951Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786854490Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 05 23:38:15 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 05 23:38:16 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 05 23:38:16 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 05 23:38:16 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:38:16 multinode-985000-m02 dockerd[909]: time="2024-08-05T23:38:16.819558392Z" level=info msg="Starting up"
	Aug 05 23:39:16 multinode-985000-m02 dockerd[909]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 05 23:39:16 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 05 23:39:16 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 05 23:39:16 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0805 16:39:16.709036    5521 out.go:239] * 
	W0805 16:39:16.710224    5521 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:39:16.772583    5521 out.go:177] 
	
	
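	The failure above bottoms out in dockerd timing out while dialing /run/containerd/containerd.sock on multinode-985000-m02. A minimal triage sketch, assuming SSH access to that node; the profile and node names are taken from the log, and every command below is illustrative rather than part of the captured run:
	
	  # Open a shell on the worker node that failed to restart docker
	  minikube ssh -p multinode-985000 -n m02
	  # Is the system containerd (the socket dockerd tried to dial) running at all?
	  sudo systemctl status containerd docker --no-pager
	  sudo journalctl -u containerd --no-pager | tail -n 50
	  # Confirm the socket dockerd timed out on actually exists
	  ls -l /run/containerd/containerd.sock
	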
	==> Docker <==
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.530647852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.530659237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.530721053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.587753877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.587813098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.587868053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.587933581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:37:59 multinode-985000 cri-dockerd[1158]: time="2024-08-05T23:37:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cd4b2b55e63d667baa0f6c6c9596a80de9a5e7e56f52b4f35c1a9f872b7103a5/resolv.conf as [nameserver 192.169.0.1]"
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.809728237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.809773629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.809829513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.809895416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:37:59 multinode-985000 cri-dockerd[1158]: time="2024-08-05T23:37:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/658cceb77ae8c0f75cf82b1523a9419bd5b36531ba34b839ac50b6aefb77d462/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.904825743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.904885148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.904912065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.905156720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:38:14 multinode-985000 dockerd[903]: time="2024-08-05T23:38:14.290421548Z" level=info msg="ignoring event" container=0d0f4c86d1e8c797cb0c58d08f505521679191138c65b7051df09ccf4e702a25 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 05 23:38:14 multinode-985000 dockerd[909]: time="2024-08-05T23:38:14.291138494Z" level=info msg="shim disconnected" id=0d0f4c86d1e8c797cb0c58d08f505521679191138c65b7051df09ccf4e702a25 namespace=moby
	Aug 05 23:38:14 multinode-985000 dockerd[909]: time="2024-08-05T23:38:14.291376058Z" level=warning msg="cleaning up after shim disconnected" id=0d0f4c86d1e8c797cb0c58d08f505521679191138c65b7051df09ccf4e702a25 namespace=moby
	Aug 05 23:38:14 multinode-985000 dockerd[909]: time="2024-08-05T23:38:14.291419423Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 05 23:38:27 multinode-985000 dockerd[909]: time="2024-08-05T23:38:27.687033437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:38:27 multinode-985000 dockerd[909]: time="2024-08-05T23:38:27.687615016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:38:27 multinode-985000 dockerd[909]: time="2024-08-05T23:38:27.687656640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:38:27 multinode-985000 dockerd[909]: time="2024-08-05T23:38:27.687946254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	f0f4bede55f3a       6e38f40d628db                                                                                         51 seconds ago       Running             storage-provisioner       2                   3dbf65ea93f78       storage-provisioner
	fb1f1e1ed4457       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   658cceb77ae8c       busybox-fc5497c4f-44k5g
	2141742da0666       cbb01a7bd410d                                                                                         About a minute ago   Running             coredns                   1                   cd4b2b55e63d6       coredns-7db6d8ff4d-fqtll
	d5738d55fecd4       917d7814b9b5b                                                                                         About a minute ago   Running             kindnet-cni               1                   0f87877cd7c1a       kindnet-tvtvg
	0d0f4c86d1e8c       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   3dbf65ea93f78       storage-provisioner
	413cda260d217       55bb025d2cfa5                                                                                         About a minute ago   Running             kube-proxy                1                   b802ec8e629da       kube-proxy-fwgw7
	ff391cbc1ee5d       3edc18e7b7672                                                                                         About a minute ago   Running             kube-scheduler            1                   12292d1aa4843       kube-scheduler-multinode-985000
	ee05acb4726f8       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      1                   0b1913061cd3f       etcd-multinode-985000
	92bdde18e9bc2       1f6d574d502f3                                                                                         About a minute ago   Running             kube-apiserver            1                   4f42c6fa501f4       kube-apiserver-multinode-985000
	b348fa62c4a57       76932a3b37d7e                                                                                         About a minute ago   Running             kube-controller-manager   1                   3bf209dcf9a99       kube-controller-manager-multinode-985000
	0cbc162071e51       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   16 minutes ago       Exited              busybox                   0                   abfb33d4f204d       busybox-fc5497c4f-44k5g
	c9365aec33892       cbb01a7bd410d                                                                                         17 minutes ago       Exited              coredns                   0                   35b9ac42edc06       coredns-7db6d8ff4d-fqtll
	724e5cfab0a27       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              17 minutes ago       Exited              kindnet-cni               0                   65a1122097f07       kindnet-tvtvg
	d58ca48f9f8b2       55bb025d2cfa5                                                                                         17 minutes ago       Exited              kube-proxy                0                   c91338eb0e138       kube-proxy-fwgw7
	792feba1a6f6b       3edc18e7b7672                                                                                         18 minutes ago       Exited              kube-scheduler            0                   c86e04eb7823b       kube-scheduler-multinode-985000
	1fdd85b796ab3       3861cfcd7c04c                                                                                         18 minutes ago       Exited              etcd                      0                   b58900db52990       etcd-multinode-985000
	d11865076c645       76932a3b37d7e                                                                                         18 minutes ago       Exited              kube-controller-manager   0                   55a20063845e3       kube-controller-manager-multinode-985000
	608878b33f358       1f6d574d502f3                                                                                         18 minutes ago       Exited              kube-apiserver            0                   569788c2699f1       kube-apiserver-multinode-985000
	
	
	==> coredns [2141742da066] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55096 - 16258 "HINFO IN 3588705990584082194.7089874688342145824. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012628073s
	
	
	==> coredns [c9365aec3389] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57821 - 19682 "HINFO IN 7732396596932693360.4385804994640298901. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014623104s
	[INFO] 10.244.0.3:44234 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136193s
	[INFO] 10.244.0.3:37423 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.058799401s
	[INFO] 10.244.0.3:57961 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.010090318s
	[INFO] 10.244.0.3:37799 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.012765436s
	[INFO] 10.244.0.3:46499 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078364s
	[INFO] 10.244.0.3:42436 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011216992s
	[INFO] 10.244.0.3:35880 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000144767s
	[INFO] 10.244.0.3:39224 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104006s
	[INFO] 10.244.0.3:48536 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013324615s
	[INFO] 10.244.0.3:55841 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000221823s
	[INFO] 10.244.0.3:46712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111417s
	[INFO] 10.244.0.3:51982 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099744s
	[INFO] 10.244.0.3:55425 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080184s
	[INFO] 10.244.0.3:58084 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119904s
	[INFO] 10.244.0.3:57892 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049065s
	[INFO] 10.244.0.3:52329 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049128s
	[INFO] 10.244.0.3:60384 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083319s
	[INFO] 10.244.0.3:51923 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000058598s
	[INFO] 10.244.0.3:37985 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007256s
	[INFO] 10.244.0.3:45792 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000071025s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-985000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-985000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=multinode-985000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T16_21_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:21:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-985000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:39:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:37:57 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:37:57 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:37:57 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:37:57 +0000   Mon, 05 Aug 2024 23:37:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-985000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 b981b6a36d124fcaadeb3cd3197bf53b
	  System UUID:                3ac6443b-0000-0000-898d-9b152fa64288
	  Boot ID:                    8bf7ffe6-c2c9-4868-8b47-da7da3d15cdf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-44k5g                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-7db6d8ff4d-fqtll                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     17m
	  kube-system                 etcd-multinode-985000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-tvtvg                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-multinode-985000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-multinode-985000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-fwgw7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-multinode-985000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 17m                kube-proxy       
	  Normal  Starting                 94s                kube-proxy       
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node multinode-985000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node multinode-985000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node multinode-985000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    18m                kubelet          Node multinode-985000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m                kubelet          Node multinode-985000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     18m                kubelet          Node multinode-985000 status is now: NodeHasSufficientPID
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           17m                node-controller  Node multinode-985000 event: Registered Node multinode-985000 in Controller
	  Normal  NodeReady                17m                kubelet          Node multinode-985000 status is now: NodeReady
	  Normal  Starting                 99s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  99s (x8 over 99s)  kubelet          Node multinode-985000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    99s (x8 over 99s)  kubelet          Node multinode-985000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     99s (x7 over 99s)  kubelet          Node multinode-985000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  99s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           83s                node-controller  Node multinode-985000 event: Registered Node multinode-985000 in Controller
	
	
	Name:               multinode-985000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-985000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=multinode-985000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_08_05T16_35_55_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:35:55 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-985000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:36:46 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 05 Aug 2024 23:36:09 +0000   Mon, 05 Aug 2024 23:38:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 05 Aug 2024 23:36:09 +0000   Mon, 05 Aug 2024 23:38:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 05 Aug 2024 23:36:09 +0000   Mon, 05 Aug 2024 23:38:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 05 Aug 2024 23:36:09 +0000   Mon, 05 Aug 2024 23:38:35 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.15
	  Hostname:    multinode-985000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 de33b8a09ea841548571815588d91336
	  System UUID:                f79c425f-0000-0000-b959-1b18fd31916b
	  Boot ID:                    a263d4fd-5a9a-4e6d-b9a5-6d8b00715c16
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-p2wf9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 kindnet-5kfjr              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m29s
	  kube-system                 kube-proxy-s65dd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m23s                  kube-proxy       
	  Normal  Starting                 3m21s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  4m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m30s (x2 over 4m30s)  kubelet          Node multinode-985000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m30s (x2 over 4m30s)  kubelet          Node multinode-985000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m30s (x2 over 4m30s)  kubelet          Node multinode-985000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m7s                   kubelet          Node multinode-985000-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m24s (x2 over 3m24s)  kubelet          Node multinode-985000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m24s (x2 over 3m24s)  kubelet          Node multinode-985000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m24s (x2 over 3m24s)  kubelet          Node multinode-985000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m19s                  node-controller  Node multinode-985000-m03 event: Registered Node multinode-985000-m03 in Controller
	  Normal  NodeReady                3m9s                   kubelet          Node multinode-985000-m03 status is now: NodeReady
	  Normal  RegisteredNode           83s                    node-controller  Node multinode-985000-m03 event: Registered Node multinode-985000-m03 in Controller
	  Normal  NodeNotReady             43s                    node-controller  Node multinode-985000-m03 status is now: NodeNotReady
	
	
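	multinode-985000-m03 above carries the node.kubernetes.io/unreachable taints because its kubelet stopped posting status at 23:38:35. A short verification sketch, assuming the cluster's kubeconfig is loaded on the host; these commands are illustrative and their output is not part of the captured run:
	
	  kubectl get nodes -o wide                      # m03 should report NotReady
	  kubectl describe node multinode-985000-m03     # shows the Conditions/Taints printed above
	  kubectl get events --field-selector involvedObject.name=multinode-985000-m03
	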
	==> dmesg <==
	[  +5.661439] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007055] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.766548] systemd-fstab-generator[126]: Ignoring "noauto" option for root device
	[  +2.232761] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.602536] systemd-fstab-generator[465]: Ignoring "noauto" option for root device
	[  +0.108699] systemd-fstab-generator[477]: Ignoring "noauto" option for root device
	[  +1.844656] systemd-fstab-generator[832]: Ignoring "noauto" option for root device
	[  +0.244366] systemd-fstab-generator[869]: Ignoring "noauto" option for root device
	[  +0.093826] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.056002] kauditd_printk_skb: 123 callbacks suppressed
	[  +0.061114] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +2.459899] systemd-fstab-generator[1111]: Ignoring "noauto" option for root device
	[  +0.103560] systemd-fstab-generator[1123]: Ignoring "noauto" option for root device
	[  +0.100329] systemd-fstab-generator[1135]: Ignoring "noauto" option for root device
	[  +0.122414] systemd-fstab-generator[1150]: Ignoring "noauto" option for root device
	[  +0.416040] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +1.958427] systemd-fstab-generator[1414]: Ignoring "noauto" option for root device
	[  +0.064860] kauditd_printk_skb: 180 callbacks suppressed
	[  +5.001373] kauditd_printk_skb: 90 callbacks suppressed
	[  +2.036850] systemd-fstab-generator[2247]: Ignoring "noauto" option for root device
	[  +8.657009] kauditd_printk_skb: 42 callbacks suppressed
	[Aug 5 23:38] kauditd_printk_skb: 16 callbacks suppressed
	
	
	==> etcd [1fdd85b796ab] <==
	{"level":"info","ts":"2024-08-05T23:21:02.852037Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:21:02.855611Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-08-05T23:21:02.856003Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.856059Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.85615Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.863221Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:21:02.86336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T23:21:02.863406Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T23:21:02.864495Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T23:31:02.914901Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":684}
	{"level":"info","ts":"2024-08-05T23:31:02.918154Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":684,"took":"2.558785ms","hash":2682644219,"current-db-size-bytes":2088960,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2088960,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-08-05T23:31:02.918199Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2682644219,"revision":684,"compact-revision":-1}
	{"level":"info","ts":"2024-08-05T23:36:02.919565Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":925}
	{"level":"info","ts":"2024-08-05T23:36:02.920973Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":925,"took":"1.036284ms","hash":3918561037,"current-db-size-bytes":2088960,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1814528,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-08-05T23:36:02.921075Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3918561037,"revision":925,"compact-revision":684}
	{"level":"info","ts":"2024-08-05T23:37:11.447748Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-05T23:37:11.447778Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-985000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"]}
	{"level":"warn","ts":"2024-08-05T23:37:11.447827Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:37:11.447882Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:37:11.491519Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.13:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:37:11.491562Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.13:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-05T23:37:11.493311Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e0290fa3161c5471","current-leader-member-id":"e0290fa3161c5471"}
	{"level":"info","ts":"2024-08-05T23:37:11.498118Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-08-05T23:37:11.498186Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-08-05T23:37:11.498193Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-985000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"]}
	
	
	==> etcd [ee05acb4726f] <==
	{"level":"info","ts":"2024-08-05T23:37:40.599067Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:37:40.599077Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:37:40.599334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 switched to configuration voters=(16152458731666035825)"}
	{"level":"info","ts":"2024-08-05T23:37:40.599394Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2024-08-05T23:37:40.59965Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:37:40.599742Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:37:40.604814Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-05T23:37:40.605055Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e0290fa3161c5471","initial-advertise-peer-urls":["https://192.169.0.13:2380"],"listen-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.13:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-05T23:37:40.605095Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-05T23:37:40.605211Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-08-05T23:37:40.605239Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-08-05T23:37:41.689469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-05T23:37:41.689514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-05T23:37:41.689535Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-05T23:37:41.689547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 3"}
	{"level":"info","ts":"2024-08-05T23:37:41.689571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 3"}
	{"level":"info","ts":"2024-08-05T23:37:41.68958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 3"}
	{"level":"info","ts":"2024-08-05T23:37:41.689585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 3"}
	{"level":"info","ts":"2024-08-05T23:37:41.690625Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-985000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T23:37:41.690781Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:37:41.690883Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:37:41.691356Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T23:37:41.691386Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T23:37:41.692361Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T23:37:41.700262Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	
	
	==> kernel <==
	 23:39:18 up 1 min,  0 users,  load average: 0.16, 0.12, 0.04
	Linux multinode-985000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [724e5cfab0a2] <==
	I0805 23:36:04.991992       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.169.0.15 Flags: [] Table: 0} 
	I0805 23:36:14.989579       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:36:14.989997       1 main.go:299] handling current node
	I0805 23:36:14.990198       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:36:14.990433       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:36:24.988684       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:36:24.988821       1 main.go:299] handling current node
	I0805 23:36:24.988872       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:36:24.988911       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:36:34.988817       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:36:34.988909       1 main.go:299] handling current node
	I0805 23:36:34.988935       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:36:34.988949       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:36:44.992669       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:36:44.992745       1 main.go:299] handling current node
	I0805 23:36:44.992779       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:36:44.992802       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:36:54.996793       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:36:54.996835       1 main.go:299] handling current node
	I0805 23:36:54.996848       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:36:54.996853       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:37:04.997759       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:37:04.997893       1 main.go:299] handling current node
	I0805 23:37:04.998013       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:37:04.998174       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [d5738d55fecd] <==
	I0805 23:38:15.466366       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:38:25.473920       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:38:25.474153       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:38:25.474420       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:38:25.474574       1 main.go:299] handling current node
	I0805 23:38:35.465082       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:38:35.465206       1 main.go:299] handling current node
	I0805 23:38:35.465224       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:38:35.465233       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:38:45.465468       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:38:45.465540       1 main.go:299] handling current node
	I0805 23:38:45.465559       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:38:45.465568       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:38:55.473477       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:38:55.473827       1 main.go:299] handling current node
	I0805 23:38:55.475737       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:38:55.475768       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:39:05.475062       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:39:05.475239       1 main.go:299] handling current node
	I0805 23:39:05.475320       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:39:05.475396       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:39:15.472359       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:39:15.472586       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:39:15.472820       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:39:15.472919       1 main.go:299] handling current node
	
	
	==> kube-apiserver [608878b33f35] <==
	W0805 23:37:11.486438       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.486583       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.486625       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.486650       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.486674       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.486898       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.486927       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.487716       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.487755       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.487780       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.487847       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.487875       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489041       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489104       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489127       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489147       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489171       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489257       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489281       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489307       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489633       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489864       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489935       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.490056       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0805 23:37:11.514946       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-apiserver [92bdde18e9bc] <==
	I0805 23:37:42.730543       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 23:37:42.736278       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0805 23:37:42.737676       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0805 23:37:42.738333       1 shared_informer.go:320] Caches are synced for configmaps
	I0805 23:37:42.738384       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0805 23:37:42.738390       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0805 23:37:42.739302       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0805 23:37:42.741676       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0805 23:37:42.741754       1 aggregator.go:165] initial CRD sync complete...
	I0805 23:37:42.741787       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 23:37:42.741831       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 23:37:42.741875       1 cache.go:39] Caches are synced for autoregister controller
	E0805 23:37:42.744121       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0805 23:37:42.798361       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0805 23:37:42.804367       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0805 23:37:42.804860       1 policy_source.go:224] refreshing policies
	I0805 23:37:42.821782       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 23:37:43.633884       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0805 23:37:44.781620       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 23:37:44.898279       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0805 23:37:44.905563       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 23:37:44.945734       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 23:37:44.950191       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 23:37:55.099564       1 controller.go:615] quota admission added evaluator for: endpoints
	I0805 23:37:55.156540       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b348fa62c4a5] <==
	I0805 23:37:55.195810       1 shared_informer.go:320] Caches are synced for crt configmap
	I0805 23:37:55.227474       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0805 23:37:55.228683       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0805 23:37:55.228726       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0805 23:37:55.228925       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0805 23:37:55.237882       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0805 23:37:55.255771       1 shared_informer.go:320] Caches are synced for PVC protection
	I0805 23:37:55.263474       1 shared_informer.go:320] Caches are synced for attach detach
	I0805 23:37:55.298454       1 shared_informer.go:320] Caches are synced for ephemeral
	I0805 23:37:55.302814       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 23:37:55.314458       1 shared_informer.go:320] Caches are synced for expand
	I0805 23:37:55.338263       1 shared_informer.go:320] Caches are synced for stateful set
	I0805 23:37:55.343814       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 23:37:55.345575       1 shared_informer.go:320] Caches are synced for persistent volume
	I0805 23:37:55.730758       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 23:37:55.734111       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 23:37:55.734173       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0805 23:37:57.213036       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-985000-m03"
	I0805 23:38:00.018589       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.728µs"
	I0805 23:38:00.035169       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.837404ms"
	I0805 23:38:00.036511       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.223µs"
	I0805 23:38:01.038943       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.071233ms"
	I0805 23:38:01.039751       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.639µs"
	I0805 23:38:35.241010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.858922ms"
	I0805 23:38:35.241084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.645µs"
	
	
	==> kube-controller-manager [d11865076c64] <==
	I0805 23:22:59.132399       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.529µs"
	I0805 23:34:49.118620       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-985000-m03\" does not exist"
	I0805 23:34:49.123685       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-985000-m03" podCIDRs=["10.244.1.0/24"]
	I0805 23:34:49.553799       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-985000-m03"
	I0805 23:35:12.244278       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-985000-m03"
	I0805 23:35:12.252224       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.969µs"
	I0805 23:35:12.259725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.754µs"
	I0805 23:35:14.267796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.716009ms"
	I0805 23:35:14.267862       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.069µs"
	I0805 23:35:51.179064       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.106041ms"
	I0805 23:35:51.195857       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.438177ms"
	I0805 23:35:51.211043       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.139069ms"
	I0805 23:35:51.211379       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="291.666µs"
	I0805 23:35:55.268521       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-985000-m03\" does not exist"
	I0805 23:35:55.272637       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-985000-m03" podCIDRs=["10.244.2.0/24"]
	I0805 23:35:57.161739       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.697µs"
	I0805 23:36:10.485777       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-985000-m03"
	I0805 23:36:10.496807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.532µs"
	I0805 23:36:19.181053       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.67µs"
	I0805 23:36:19.184540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.764µs"
	I0805 23:36:19.191433       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.037µs"
	I0805 23:36:19.365196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.813µs"
	I0805 23:36:19.367176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.532µs"
	I0805 23:36:20.387745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.044943ms"
	I0805 23:36:20.388000       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.528µs"
	
	
	==> kube-proxy [413cda260d21] <==
	I0805 23:37:44.324911       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:37:44.341877       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0805 23:37:44.398640       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:37:44.398662       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:37:44.398675       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:37:44.401178       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:37:44.401588       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:37:44.401598       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:37:44.402850       1 config.go:192] "Starting service config controller"
	I0805 23:37:44.403035       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:37:44.403115       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:37:44.403158       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:37:44.403823       1 config.go:319] "Starting node config controller"
	I0805 23:37:44.404599       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:37:44.505447       1 shared_informer.go:320] Caches are synced for node config
	I0805 23:37:44.505492       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:37:44.505525       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d58ca48f9f8b] <==
	I0805 23:21:21.029929       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:21:21.072929       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0805 23:21:21.105532       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:21:21.105552       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:21:21.105563       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:21:21.107493       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:21:21.107594       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:21:21.107602       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:21:21.108477       1 config.go:192] "Starting service config controller"
	I0805 23:21:21.108482       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:21:21.108492       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:21:21.108494       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:21:21.108784       1 config.go:319] "Starting node config controller"
	I0805 23:21:21.108789       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:21:21.209420       1 shared_informer.go:320] Caches are synced for node config
	I0805 23:21:21.209474       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:21:21.209501       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [792feba1a6f6] <==
	E0805 23:21:04.024229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 23:21:04.024017       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:21:04.024329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:21:04.024047       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:21:04.024362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:21:04.024118       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:21:04.024431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0805 23:21:04.860871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:04.861069       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:04.959895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 23:21:04.959949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 23:21:04.962444       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:21:04.962496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:21:04.968410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:21:04.968452       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:21:05.030527       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 23:21:05.030566       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 23:21:05.076451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:05.076659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:05.118159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:05.118676       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:05.141945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:21:05.142020       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0805 23:21:08.218627       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0805 23:37:11.443644       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ff391cbc1ee5] <==
	I0805 23:37:40.960901       1 serving.go:380] Generated self-signed cert in-memory
	W0805 23:37:42.679762       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0805 23:37:42.679944       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 23:37:42.680026       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0805 23:37:42.680120       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0805 23:37:42.720120       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0805 23:37:42.720155       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:37:42.722970       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0805 23:37:42.723116       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 23:37:42.722988       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0805 23:37:42.723009       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 23:37:42.824314       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 23:37:47 multinode-985000 kubelet[1421]: E0805 23:37:47.653173    1421 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-fqtll" podUID="4d8af129-475b-4185-8b0d-cbda67812964"
	Aug 05 23:37:47 multinode-985000 kubelet[1421]: E0805 23:37:47.654274    1421 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-44k5g" podUID="63691d17-216a-4893-8285-dbaf6269eced"
	Aug 05 23:37:49 multinode-985000 kubelet[1421]: E0805 23:37:49.653930    1421 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-44k5g" podUID="63691d17-216a-4893-8285-dbaf6269eced"
	Aug 05 23:37:49 multinode-985000 kubelet[1421]: E0805 23:37:49.654181    1421 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-fqtll" podUID="4d8af129-475b-4185-8b0d-cbda67812964"
	Aug 05 23:37:51 multinode-985000 kubelet[1421]: E0805 23:37:51.243146    1421 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Aug 05 23:37:51 multinode-985000 kubelet[1421]: E0805 23:37:51.243592    1421 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4d8af129-475b-4185-8b0d-cbda67812964-config-volume podName:4d8af129-475b-4185-8b0d-cbda67812964 nodeName:}" failed. No retries permitted until 2024-08-05 23:37:59.243577968 +0000 UTC m=+19.748233082 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/4d8af129-475b-4185-8b0d-cbda67812964-config-volume") pod "coredns-7db6d8ff4d-fqtll" (UID: "4d8af129-475b-4185-8b0d-cbda67812964") : object "kube-system"/"coredns" not registered
	Aug 05 23:37:51 multinode-985000 kubelet[1421]: E0805 23:37:51.343990    1421 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Aug 05 23:37:51 multinode-985000 kubelet[1421]: E0805 23:37:51.344160    1421 projected.go:200] Error preparing data for projected volume kube-api-access-qxrlf for pod default/busybox-fc5497c4f-44k5g: object "default"/"kube-root-ca.crt" not registered
	Aug 05 23:37:51 multinode-985000 kubelet[1421]: E0805 23:37:51.344349    1421 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/63691d17-216a-4893-8285-dbaf6269eced-kube-api-access-qxrlf podName:63691d17-216a-4893-8285-dbaf6269eced nodeName:}" failed. No retries permitted until 2024-08-05 23:37:59.344325481 +0000 UTC m=+19.848980605 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-qxrlf" (UniqueName: "kubernetes.io/projected/63691d17-216a-4893-8285-dbaf6269eced-kube-api-access-qxrlf") pod "busybox-fc5497c4f-44k5g" (UID: "63691d17-216a-4893-8285-dbaf6269eced") : object "default"/"kube-root-ca.crt" not registered
	Aug 05 23:37:51 multinode-985000 kubelet[1421]: E0805 23:37:51.652559    1421 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-fqtll" podUID="4d8af129-475b-4185-8b0d-cbda67812964"
	Aug 05 23:37:51 multinode-985000 kubelet[1421]: E0805 23:37:51.653705    1421 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-44k5g" podUID="63691d17-216a-4893-8285-dbaf6269eced"
	Aug 05 23:37:53 multinode-985000 kubelet[1421]: E0805 23:37:53.653376    1421 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-fqtll" podUID="4d8af129-475b-4185-8b0d-cbda67812964"
	Aug 05 23:37:53 multinode-985000 kubelet[1421]: E0805 23:37:53.653903    1421 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-44k5g" podUID="63691d17-216a-4893-8285-dbaf6269eced"
	Aug 05 23:37:55 multinode-985000 kubelet[1421]: E0805 23:37:55.654495    1421 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-fqtll" podUID="4d8af129-475b-4185-8b0d-cbda67812964"
	Aug 05 23:37:55 multinode-985000 kubelet[1421]: E0805 23:37:55.654798    1421 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-44k5g" podUID="63691d17-216a-4893-8285-dbaf6269eced"
	Aug 05 23:37:57 multinode-985000 kubelet[1421]: I0805 23:37:57.206744    1421 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Aug 05 23:38:15 multinode-985000 kubelet[1421]: I0805 23:38:15.134093    1421 scope.go:117] "RemoveContainer" containerID="3d9fd612d0b14777e3c2f36e84aa669c6aba33c9885ee2054f4dc5d9183e18fa"
	Aug 05 23:38:15 multinode-985000 kubelet[1421]: I0805 23:38:15.134335    1421 scope.go:117] "RemoveContainer" containerID="0d0f4c86d1e8c797cb0c58d08f505521679191138c65b7051df09ccf4e702a25"
	Aug 05 23:38:15 multinode-985000 kubelet[1421]: E0805 23:38:15.134437    1421 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(72ec8458-5c62-43eb-9120-0146e6ccaf8f)\"" pod="kube-system/storage-provisioner" podUID="72ec8458-5c62-43eb-9120-0146e6ccaf8f"
	Aug 05 23:38:27 multinode-985000 kubelet[1421]: I0805 23:38:27.652833    1421 scope.go:117] "RemoveContainer" containerID="0d0f4c86d1e8c797cb0c58d08f505521679191138c65b7051df09ccf4e702a25"
	Aug 05 23:38:39 multinode-985000 kubelet[1421]: E0805 23:38:39.676906    1421 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:38:39 multinode-985000 kubelet[1421]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:38:39 multinode-985000 kubelet[1421]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:38:39 multinode-985000 kubelet[1421]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:38:39 multinode-985000 kubelet[1421]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-985000 -n multinode-985000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-985000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (146.07s)

TestMultiNode/serial/DeleteNode (157.32s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 node delete m03
E0805 16:39:22.249382    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:41:19.196049    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:41:50.599139    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-985000 node delete m03: (2m33.742034403s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr: exit status 2 (244.508172ms)

-- stdout --
	multinode-985000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-985000-m02
	type: Worker
	host: Running
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0805 16:41:54.261487    5620 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:41:54.262174    5620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:41:54.262183    5620 out.go:304] Setting ErrFile to fd 2...
	I0805 16:41:54.262189    5620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:41:54.262681    5620 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:41:54.262879    5620 out.go:298] Setting JSON to false
	I0805 16:41:54.262905    5620 mustload.go:65] Loading cluster: multinode-985000
	I0805 16:41:54.262942    5620 notify.go:220] Checking for updates...
	I0805 16:41:54.263193    5620 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:41:54.263210    5620 status.go:255] checking status of multinode-985000 ...
	I0805 16:41:54.263555    5620 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:41:54.263597    5620 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:41:54.272363    5620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53212
	I0805 16:41:54.272733    5620 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:41:54.273137    5620 main.go:141] libmachine: Using API Version  1
	I0805 16:41:54.273156    5620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:41:54.273385    5620 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:41:54.273512    5620 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:41:54.273596    5620 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:41:54.273674    5620 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 5533
	I0805 16:41:54.274628    5620 status.go:330] multinode-985000 host status = "Running" (err=<nil>)
	I0805 16:41:54.274649    5620 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:41:54.274902    5620 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:41:54.274923    5620 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:41:54.283293    5620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53214
	I0805 16:41:54.283625    5620 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:41:54.283945    5620 main.go:141] libmachine: Using API Version  1
	I0805 16:41:54.283972    5620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:41:54.284207    5620 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:41:54.284327    5620 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:41:54.284410    5620 host.go:66] Checking if "multinode-985000" exists ...
	I0805 16:41:54.284670    5620 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:41:54.284696    5620 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:41:54.292980    5620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53217
	I0805 16:41:54.293277    5620 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:41:54.293614    5620 main.go:141] libmachine: Using API Version  1
	I0805 16:41:54.293629    5620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:41:54.293832    5620 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:41:54.293942    5620 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:41:54.294115    5620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:41:54.294136    5620 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:41:54.294207    5620 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:41:54.294306    5620 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:41:54.294381    5620 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:41:54.294459    5620 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:41:54.329525    5620 ssh_runner.go:195] Run: systemctl --version
	I0805 16:41:54.334021    5620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:41:54.346088    5620 kubeconfig.go:125] found "multinode-985000" server: "https://192.169.0.13:8443"
	I0805 16:41:54.346112    5620 api_server.go:166] Checking apiserver status ...
	I0805 16:41:54.346150    5620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:41:54.357661    5620 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1713/cgroup
	W0805 16:41:54.365825    5620 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1713/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:41:54.365871    5620 ssh_runner.go:195] Run: ls
	I0805 16:41:54.369288    5620 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:41:54.374282    5620 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0805 16:41:54.374292    5620 status.go:422] multinode-985000 apiserver status = Running (err=<nil>)
	I0805 16:41:54.374301    5620 status.go:257] multinode-985000 status: &{Name:multinode-985000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:41:54.374314    5620 status.go:255] checking status of multinode-985000-m02 ...
	I0805 16:41:54.374579    5620 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:41:54.374599    5620 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:41:54.383381    5620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53221
	I0805 16:41:54.383709    5620 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:41:54.384067    5620 main.go:141] libmachine: Using API Version  1
	I0805 16:41:54.384083    5620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:41:54.384328    5620 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:41:54.384432    5620 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:41:54.384520    5620 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:41:54.384608    5620 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 5546
	I0805 16:41:54.385559    5620 status.go:330] multinode-985000-m02 host status = "Running" (err=<nil>)
	I0805 16:41:54.385568    5620 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:41:54.385820    5620 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:41:54.385843    5620 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:41:54.394304    5620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53223
	I0805 16:41:54.394610    5620 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:41:54.394922    5620 main.go:141] libmachine: Using API Version  1
	I0805 16:41:54.394930    5620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:41:54.395159    5620 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:41:54.395279    5620 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:41:54.395379    5620 host.go:66] Checking if "multinode-985000-m02" exists ...
	I0805 16:41:54.395637    5620 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:41:54.395660    5620 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:41:54.404015    5620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53225
	I0805 16:41:54.404354    5620 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:41:54.404687    5620 main.go:141] libmachine: Using API Version  1
	I0805 16:41:54.404698    5620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:41:54.404909    5620 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:41:54.405032    5620 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:41:54.405156    5620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:41:54.405168    5620 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:41:54.405238    5620 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:41:54.405352    5620 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:41:54.405436    5620 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:41:54.405544    5620 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:41:54.439488    5620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:41:54.450723    5620 status.go:257] multinode-985000-m02 status: &{Name:multinode-985000-m02 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-985000 logs -n 25: (2.717501456s)
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:33 PDT | 05 Aug 24 16:33 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b --           |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g -- nslookup  |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b -- nslookup  |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- get pods -o   | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g              |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:34 PDT |
	|         | busybox-fc5497c4f-44k5g -- sh        |                  |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |                  |         |         |                     |                     |
	| kubectl | -p multinode-985000 -- exec          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT |                     |
	|         | busybox-fc5497c4f-ptd5b              |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                    |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk         |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                  |         |         |                     |                     |
	| node    | add -p multinode-985000 -v 3         | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:34 PDT | 05 Aug 24 16:35 PDT |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	| node    | multinode-985000 node stop m03       | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:35 PDT | 05 Aug 24 16:35 PDT |
	| node    | multinode-985000 node start          | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:35 PDT | 05 Aug 24 16:36 PDT |
	|         | m03 -v=7 --alsologtostderr           |                  |         |         |                     |                     |
	| node    | list -p multinode-985000             | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:36 PDT |                     |
	| stop    | -p multinode-985000                  | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:36 PDT | 05 Aug 24 16:37 PDT |
	| start   | -p multinode-985000                  | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:37 PDT |                     |
	|         | --wait=true -v=8                     |                  |         |         |                     |                     |
	|         | --alsologtostderr                    |                  |         |         |                     |                     |
	| node    | list -p multinode-985000             | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:39 PDT |                     |
	| node    | multinode-985000 node delete         | multinode-985000 | jenkins | v1.33.1 | 05 Aug 24 16:39 PDT | 05 Aug 24 16:41 PDT |
	|         | m03                                  |                  |         |         |                     |                     |
	|---------|--------------------------------------|------------------|---------|---------|---------------------|---------------------|
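
Note on the audit table: the run of identical jsonpath queries at the top is the harness polling until every busybox pod reports an IP before the DNS checks begin. A minimal sketch of that polling pattern, built from the exact command recorded in the audit (the loop itself, the retry count, and the sleep interval are assumptions for illustration, not the harness's actual code):

	# Poll pod IPs until both busybox replicas have one (sketch)
	for i in $(seq 1 12); do
	  ips=$(out/minikube-darwin-amd64 kubectl -p multinode-985000 -- \
	    get pods -o jsonpath='{.items[*].status.podIP}')
	  [ "$(echo "$ips" | wc -w)" -ge 2 ] && break
	  sleep 5
	done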
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 16:37:19
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 16:37:19.344110    5521 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:37:19.344466    5521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:37:19.344474    5521 out.go:304] Setting ErrFile to fd 2...
	I0805 16:37:19.344479    5521 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:37:19.344702    5521 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:37:19.346290    5521 out.go:298] Setting JSON to false
	I0805 16:37:19.368484    5521 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4010,"bootTime":1722897029,"procs":431,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:37:19.368574    5521 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:37:19.390244    5521 out.go:177] * [multinode-985000] minikube v1.33.1 on Darwin 14.5
	I0805 16:37:19.432083    5521 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:37:19.432145    5521 notify.go:220] Checking for updates...
	I0805 16:37:19.474965    5521 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:37:19.495989    5521 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:37:19.517187    5521 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:37:19.537983    5521 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:37:19.558962    5521 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:37:19.580823    5521 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:37:19.580992    5521 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:37:19.581649    5521 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:37:19.581721    5521 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:37:19.591086    5521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53115
	I0805 16:37:19.591452    5521 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:37:19.591907    5521 main.go:141] libmachine: Using API Version  1
	I0805 16:37:19.591915    5521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:37:19.592186    5521 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:37:19.592316    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:19.621203    5521 out.go:177] * Using the hyperkit driver based on existing profile
	I0805 16:37:19.663060    5521 start.go:297] selected driver: hyperkit
	I0805 16:37:19.663084    5521 start.go:901] validating driver "hyperkit" against &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:37:19.663335    5521 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:37:19.663521    5521 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:37:19.663719    5521 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 16:37:19.672949    5521 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 16:37:19.676917    5521 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:37:19.676939    5521 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 16:37:19.679650    5521 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:37:19.679719    5521 cni.go:84] Creating CNI manager for ""
	I0805 16:37:19.679731    5521 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0805 16:37:19.679807    5521 start.go:340] cluster config:
	{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:37:19.679904    5521 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:37:19.721789    5521 out.go:177] * Starting "multinode-985000" primary control-plane node in "multinode-985000" cluster
	I0805 16:37:19.742954    5521 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:37:19.743026    5521 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 16:37:19.743048    5521 cache.go:56] Caching tarball of preloaded images
	I0805 16:37:19.743247    5521 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:37:19.743265    5521 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:37:19.743456    5521 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:37:19.744298    5521 start.go:360] acquireMachinesLock for multinode-985000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:37:19.744469    5521 start.go:364] duration metric: took 148.41µs to acquireMachinesLock for "multinode-985000"
	I0805 16:37:19.744508    5521 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:37:19.744520    5521 fix.go:54] fixHost starting: 
	I0805 16:37:19.744954    5521 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:37:19.744979    5521 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:37:19.753692    5521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53117
	I0805 16:37:19.754053    5521 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:37:19.754374    5521 main.go:141] libmachine: Using API Version  1
	I0805 16:37:19.754383    5521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:37:19.754660    5521 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:37:19.754807    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:19.754921    5521 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:37:19.755005    5521 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:37:19.755109    5521 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 4651
	I0805 16:37:19.755997    5521 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid 4651 missing from process table
	I0805 16:37:19.756024    5521 fix.go:112] recreateIfNeeded on multinode-985000: state=Stopped err=<nil>
	I0805 16:37:19.756039    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	W0805 16:37:19.756134    5521 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:37:19.797962    5521 out.go:177] * Restarting existing hyperkit VM for "multinode-985000" ...
	I0805 16:37:19.821296    5521 main.go:141] libmachine: (multinode-985000) Calling .Start
	I0805 16:37:19.821573    5521 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:37:19.821663    5521 main.go:141] libmachine: (multinode-985000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid
	I0805 16:37:19.823405    5521 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid 4651 missing from process table
	I0805 16:37:19.823427    5521 main.go:141] libmachine: (multinode-985000) DBG | pid 4651 is in state "Stopped"
	I0805 16:37:19.823442    5521 main.go:141] libmachine: (multinode-985000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid...
	I0805 16:37:19.823689    5521 main.go:141] libmachine: (multinode-985000) DBG | Using UUID 3ac698fc-f622-443b-898d-9b152fa64288
	I0805 16:37:19.935040    5521 main.go:141] libmachine: (multinode-985000) DBG | Generated MAC e2:6:14:d2:13:ae
	I0805 16:37:19.935070    5521 main.go:141] libmachine: (multinode-985000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:37:19.935187    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a67e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:37:19.935220    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a67e0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:37:19.935274    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3ac698fc-f622-443b-898d-9b152fa64288", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:37:19.935303    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3ac698fc-f622-443b-898d-9b152fa64288 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
	I0805 16:37:19.935323    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:37:19.936734    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 DEBUG: hyperkit: Pid is 5533
	I0805 16:37:19.937092    5521 main.go:141] libmachine: (multinode-985000) DBG | Attempt 0
	I0805 16:37:19.937106    5521 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:37:19.937205    5521 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 5533
	I0805 16:37:19.939053    5521 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:37:19.939115    5521 main.go:141] libmachine: (multinode-985000) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0805 16:37:19.939146    5521 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:37:19.939167    5521 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b00c}
	I0805 16:37:19.939179    5521 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2afca}
	I0805 16:37:19.939190    5521 main.go:141] libmachine: (multinode-985000) DBG | Found match: e2:6:14:d2:13:ae
	I0805 16:37:19.939202    5521 main.go:141] libmachine: (multinode-985000) DBG | IP: 192.169.0.13
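
Note on the lease search above: this is how the hyperkit driver recovers a restarted VM's IP. It has no guest agent, so it scans macOS's DHCP lease database for the MAC address it generated for the VM's NIC. The equivalent manual check, with the MAC and path taken from the log lines above (a sketch; may need sudo depending on file permissions):

	# Find the lease entry for the VM's generated MAC address
	grep 'e2:6:14:d2:13:ae' /var/db/dhcpd_leases
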
	I0805 16:37:19.939251    5521 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:37:19.939918    5521 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:37:19.940105    5521 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:37:19.940507    5521 machine.go:94] provisionDockerMachine start ...
	I0805 16:37:19.940521    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:19.940712    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:19.940833    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:19.940944    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:19.941063    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:19.941184    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:19.941317    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:19.941534    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:19.941543    5521 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:37:19.945439    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:37:19.998236    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:37:19.999189    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:37:19.999209    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:37:19.999217    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:37:19.999225    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:19 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:37:20.381357    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:37:20.381372    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:37:20.495827    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:37:20.495847    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:37:20.495864    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:37:20.495880    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:37:20.496727    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:37:20.496740    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:20 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:37:26.053033    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:26 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0805 16:37:26.053095    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:26 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0805 16:37:26.053106    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:26 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0805 16:37:26.078427    5521 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:37:26 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0805 16:37:31.014343    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 16:37:31.014358    5521 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:37:31.014500    5521 buildroot.go:166] provisioning hostname "multinode-985000"
	I0805 16:37:31.014511    5521 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:37:31.014618    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.014720    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:31.014844    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.014943    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.015061    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:31.015194    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:31.015348    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:31.015359    5521 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000 && echo "multinode-985000" | sudo tee /etc/hostname
	I0805 16:37:31.093711    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000
	
	I0805 16:37:31.093738    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.093873    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:31.093973    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.094065    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.094154    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:31.094291    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:31.094436    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:31.094447    5521 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:37:31.166381    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
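
Note on the script above: after setting the hostname it pins the machine name to 127.0.1.1 (the Debian-style loopback alias) so the guest can resolve its own name without DNS; it rewrites an existing 127.0.1.1 line in place and appends one otherwise. One way to confirm the result by hand (a sketch; assumes the profile's VM is running):

	out/minikube-darwin-amd64 ssh -p multinode-985000 "grep multinode-985000 /etc/hosts"
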
	I0805 16:37:31.166401    5521 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:37:31.166420    5521 buildroot.go:174] setting up certificates
	I0805 16:37:31.166425    5521 provision.go:84] configureAuth start
	I0805 16:37:31.166432    5521 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:37:31.166566    5521 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:37:31.166671    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.166751    5521 provision.go:143] copyHostCerts
	I0805 16:37:31.166779    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:37:31.166848    5521 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:37:31.166856    5521 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:37:31.167016    5521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:37:31.167224    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:37:31.167266    5521 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:37:31.167271    5521 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:37:31.167361    5521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:37:31.167503    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:37:31.167542    5521 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:37:31.167553    5521 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:37:31.167640    5521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:37:31.167799    5521 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-985000]
	I0805 16:37:31.333929    5521 provision.go:177] copyRemoteCerts
	I0805 16:37:31.333986    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:37:31.334003    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.334141    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:31.334246    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.334341    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:31.334442    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:37:31.373502    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:37:31.373592    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:37:31.393275    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:37:31.393333    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0805 16:37:31.412894    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:37:31.412951    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:37:31.432545    5521 provision.go:87] duration metric: took 266.106701ms to configureAuth
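
Note on configureAuth above: it regenerates a server certificate whose SANs cover the VM's IP and names, then copies the CA and server keypair into /etc/docker so dockerd can serve TLS on port 2376 (the matching --tlsverify flags appear in the docker.service unit later in this log). A quick manual check against that endpoint, using the client cert paths and IP from this log (a sketch, not part of the harness):

	docker --tlsverify \
	  --tlscacert /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem \
	  --tlscert /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem \
	  --tlskey /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem \
	  -H tcp://192.169.0.13:2376 version
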
	I0805 16:37:31.432558    5521 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:37:31.432725    5521 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:37:31.432742    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:31.432881    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.432989    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:31.433084    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.433176    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.433269    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:31.433395    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:31.433519    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:31.433527    5521 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:37:31.498617    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:37:31.498629    5521 buildroot.go:70] root file system type: tmpfs
	I0805 16:37:31.498708    5521 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:37:31.498721    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.498863    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:31.498974    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.499071    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.499155    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:31.499273    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:31.499401    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:31.499448    5521 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:37:31.575743    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:37:31.575771    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:31.575913    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:31.576016    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.576109    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:31.576205    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:31.576341    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:31.576481    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:31.576493    5521 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:37:33.234695    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
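
Note on the command above: this is minikube's idempotent unit update. It writes the desired unit to docker.service.new, diffs it against the installed unit, and only swaps the file in and restarts Docker when they differ. Here no docker.service existed yet, so diff failed and the swap branch ran, which is why systemd reports a newly created symlink. The same compare-then-swap idiom spelled out, with paths as in the log:

	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	if ! sudo diff -u "$cur" "$new"; then
	  sudo mv "$new" "$cur"
	  sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	fi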
	
	I0805 16:37:33.234711    5521 machine.go:97] duration metric: took 13.294178335s to provisionDockerMachine
	I0805 16:37:33.234727    5521 start.go:293] postStartSetup for "multinode-985000" (driver="hyperkit")
	I0805 16:37:33.234735    5521 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:37:33.234747    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:33.234933    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:37:33.234947    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:33.235048    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:33.235138    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:33.235219    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:33.235304    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:37:33.276364    5521 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:37:33.279613    5521 command_runner.go:130] > NAME=Buildroot
	I0805 16:37:33.279624    5521 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:37:33.279629    5521 command_runner.go:130] > ID=buildroot
	I0805 16:37:33.279635    5521 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:37:33.279641    5521 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:37:33.279904    5521 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:37:33.279915    5521 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:37:33.280022    5521 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:37:33.280208    5521 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:37:33.280215    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:37:33.280420    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:37:33.289381    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:37:33.319551    5521 start.go:296] duration metric: took 84.814531ms for postStartSetup
	I0805 16:37:33.319580    5521 fix.go:56] duration metric: took 13.575045291s for fixHost
	I0805 16:37:33.319592    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:33.319764    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:33.319879    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:33.319970    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:33.320074    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:33.320209    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:37:33.320347    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:37:33.320353    5521 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 16:37:33.386078    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722901053.539565012
	
	I0805 16:37:33.386090    5521 fix.go:216] guest clock: 1722901053.539565012
	I0805 16:37:33.386095    5521 fix.go:229] Guest: 2024-08-05 16:37:33.539565012 -0700 PDT Remote: 2024-08-05 16:37:33.319583 -0700 PDT m=+14.014329761 (delta=219.982012ms)
	I0805 16:37:33.386114    5521 fix.go:200] guest clock delta is within tolerance: 219.982012ms
	I0805 16:37:33.386118    5521 start.go:83] releasing machines lock for "multinode-985000", held for 13.641620815s
	I0805 16:37:33.386138    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:33.386279    5521 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:37:33.386394    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:33.386730    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:33.386845    5521 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:37:33.386917    5521 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:37:33.386942    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:33.387003    5521 ssh_runner.go:195] Run: cat /version.json
	I0805 16:37:33.387017    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:37:33.387030    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:33.387128    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:37:33.387144    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:33.387234    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:37:33.387245    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:33.387325    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:37:33.387345    5521 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:37:33.387431    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:37:33.421764    5521 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
	I0805 16:37:33.421883    5521 ssh_runner.go:195] Run: systemctl --version
	I0805 16:37:33.467550    5521 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:37:33.468651    5521 command_runner.go:130] > systemd 252 (252)
	I0805 16:37:33.468690    5521 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0805 16:37:33.468805    5521 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:37:33.473715    5521 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:37:33.473736    5521 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:37:33.473771    5521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:37:33.487255    5521 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:37:33.487298    5521 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:37:33.487311    5521 start.go:495] detecting cgroup driver to use...
	I0805 16:37:33.487409    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:37:33.501851    5521 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:37:33.502107    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:37:33.510909    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:37:33.519656    5521 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:37:33.519696    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:37:33.528321    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:37:33.536918    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:37:33.545942    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:37:33.554600    5521 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:37:33.563425    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:37:33.572074    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:37:33.580764    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
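
The sed runs above rewrite /etc/containerd/config.toml in place so containerd uses the cgroupfs driver (SystemdCgroup = false), the runc v2 shim, the registry.k8s.io/pause:3.9 sandbox image, and /etc/cni/net.d as its CNI conf_dir. A hedged Go sketch of one such rewrite, using regexp instead of sed (illustrative only, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println(err)
			return
		}
		// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0644); err != nil {
			fmt.Println(err)
		}
	}
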
	I0805 16:37:33.589491    5521 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:37:33.597187    5521 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:37:33.597327    5521 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:37:33.605146    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:37:33.699080    5521 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:37:33.715293    5521 start.go:495] detecting cgroup driver to use...
	I0805 16:37:33.715372    5521 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:37:33.725461    5521 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:37:33.725955    5521 command_runner.go:130] > [Unit]
	I0805 16:37:33.725965    5521 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:37:33.725969    5521 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:37:33.725974    5521 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:37:33.725979    5521 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:37:33.725989    5521 command_runner.go:130] > StartLimitBurst=3
	I0805 16:37:33.725993    5521 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:37:33.725997    5521 command_runner.go:130] > [Service]
	I0805 16:37:33.726001    5521 command_runner.go:130] > Type=notify
	I0805 16:37:33.726005    5521 command_runner.go:130] > Restart=on-failure
	I0805 16:37:33.726011    5521 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:37:33.726019    5521 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:37:33.726025    5521 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:37:33.726031    5521 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:37:33.726036    5521 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:37:33.726042    5521 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:37:33.726048    5521 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:37:33.726063    5521 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:37:33.726069    5521 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:37:33.726075    5521 command_runner.go:130] > ExecStart=
	I0805 16:37:33.726090    5521 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:37:33.726094    5521 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:37:33.726100    5521 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:37:33.726107    5521 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:37:33.726111    5521 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:37:33.726115    5521 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:37:33.726121    5521 command_runner.go:130] > LimitCORE=infinity
	I0805 16:37:33.726127    5521 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:37:33.726132    5521 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:37:33.726137    5521 command_runner.go:130] > TasksMax=infinity
	I0805 16:37:33.726141    5521 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:37:33.726158    5521 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:37:33.726161    5521 command_runner.go:130] > Delegate=yes
	I0805 16:37:33.726166    5521 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:37:33.726170    5521 command_runner.go:130] > KillMode=process
	I0805 16:37:33.726173    5521 command_runner.go:130] > [Install]
	I0805 16:37:33.726181    5521 command_runner.go:130] > WantedBy=multi-user.target
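
The dumped unit shows the drop-in trick systemd requires when overriding a service's command: the first, empty ExecStart= clears the ExecStart inherited from the base dockerd unit, and the second sets the replacement. In drop-in form the pattern is just:

	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd ...flags...

Without the empty line, systemd sees two ExecStart= settings and refuses to start the service, exactly as the comment in the unit warns.
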
	I0805 16:37:33.726297    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:37:33.737088    5521 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:37:33.751275    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:37:33.762646    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:37:33.773482    5521 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:37:33.799587    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:37:33.810018    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:37:33.824851    5521 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:37:33.825036    5521 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:37:33.828060    5521 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:37:33.828191    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:37:33.835356    5521 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:37:33.848939    5521 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:37:33.941490    5521 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:37:34.038935    5521 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:37:34.039041    5521 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
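
The 130-byte /etc/docker/daemon.json written here is what pins Docker to the cgroupfs driver chosen above. The log does not show the payload; a plausible minimal version (an assumption, not the verbatim file) can be produced like this in Go:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Assumed keys: only exec-opts is implied by the "cgroupfs" message
		// above; minikube's real daemon.json may carry additional settings.
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		b, _ := json.MarshalIndent(cfg, "", "  ")
		// Write this to /etc/docker/daemon.json, then daemon-reload and
		// restart docker, as the following log lines do.
		fmt.Println(string(b))
	}
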
	I0805 16:37:34.053894    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:37:34.163116    5521 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:37:36.488671    5521 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.32553387s)
	I0805 16:37:36.488731    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0805 16:37:36.499891    5521 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0805 16:37:36.512512    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:37:36.522638    5521 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0805 16:37:36.618869    5521 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0805 16:37:36.714175    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:37:36.811543    5521 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0805 16:37:36.825669    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0805 16:37:36.836762    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:37:36.945275    5521 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0805 16:37:37.004002    5521 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0805 16:37:37.004108    5521 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0805 16:37:37.008235    5521 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0805 16:37:37.008254    5521 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0805 16:37:37.008260    5521 command_runner.go:130] > Device: 0,22	Inode: 751         Links: 1
	I0805 16:37:37.008265    5521 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0805 16:37:37.008270    5521 command_runner.go:130] > Access: 2024-08-05 23:37:37.112441730 +0000
	I0805 16:37:37.008274    5521 command_runner.go:130] > Modify: 2024-08-05 23:37:37.112441730 +0000
	I0805 16:37:37.008280    5521 command_runner.go:130] > Change: 2024-08-05 23:37:37.113441659 +0000
	I0805 16:37:37.008283    5521 command_runner.go:130] >  Birth: -
	I0805 16:37:37.008458    5521 start.go:563] Will wait 60s for crictl version
	I0805 16:37:37.008503    5521 ssh_runner.go:195] Run: which crictl
	I0805 16:37:37.011447    5521 command_runner.go:130] > /usr/bin/crictl
	I0805 16:37:37.011673    5521 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0805 16:37:37.037547    5521 command_runner.go:130] > Version:  0.1.0
	I0805 16:37:37.037560    5521 command_runner.go:130] > RuntimeName:  docker
	I0805 16:37:37.037564    5521 command_runner.go:130] > RuntimeVersion:  27.1.1
	I0805 16:37:37.037568    5521 command_runner.go:130] > RuntimeApiVersion:  v1
	I0805 16:37:37.038675    5521 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.1
	RuntimeApiVersion:  v1
	I0805 16:37:37.038749    5521 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:37:37.056467    5521 command_runner.go:130] > 27.1.1
	I0805 16:37:37.057465    5521 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0805 16:37:37.074514    5521 command_runner.go:130] > 27.1.1
	I0805 16:37:37.099565    5521 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.1.1 ...
	I0805 16:37:37.099612    5521 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:37:37.099970    5521 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0805 16:37:37.104644    5521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:37:37.114271    5521 kubeadm.go:883] updating cluster {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-
dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0805 16:37:37.114369    5521 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:37:37.114424    5521 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:37:37.126439    5521 command_runner.go:130] > kindest/kindnetd:v20240730-75a5af0c
	I0805 16:37:37.126453    5521 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0805 16:37:37.126458    5521 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0805 16:37:37.126462    5521 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0805 16:37:37.126465    5521 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0805 16:37:37.126469    5521 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0805 16:37:37.126473    5521 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0805 16:37:37.126477    5521 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0805 16:37:37.126481    5521 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:37:37.126485    5521 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0805 16:37:37.127412    5521 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240730-75a5af0c
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0805 16:37:37.127420    5521 docker.go:615] Images already preloaded, skipping extraction
	I0805 16:37:37.127486    5521 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0805 16:37:37.146140    5521 command_runner.go:130] > kindest/kindnetd:v20240730-75a5af0c
	I0805 16:37:37.146154    5521 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0805 16:37:37.146159    5521 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0805 16:37:37.146163    5521 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0805 16:37:37.146167    5521 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0805 16:37:37.146170    5521 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0805 16:37:37.146174    5521 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0805 16:37:37.146179    5521 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0805 16:37:37.146182    5521 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0805 16:37:37.146186    5521 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0805 16:37:37.146679    5521 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240730-75a5af0c
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0805 16:37:37.146698    5521 cache_images.go:84] Images are preloaded, skipping loading
	I0805 16:37:37.146707    5521 kubeadm.go:934] updating node { 192.169.0.13 8443 v1.30.3 docker true true} ...
	I0805 16:37:37.146784    5521 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-985000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0805 16:37:37.146863    5521 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0805 16:37:37.182908    5521 command_runner.go:130] > cgroupfs
	I0805 16:37:37.183498    5521 cni.go:84] Creating CNI manager for ""
	I0805 16:37:37.183509    5521 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0805 16:37:37.183518    5521 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0805 16:37:37.183536    5521 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.13 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-985000 NodeName:multinode-985000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0805 16:37:37.183619    5521 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-985000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0805 16:37:37.183677    5521 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0805 16:37:37.192063    5521 command_runner.go:130] > kubeadm
	I0805 16:37:37.192073    5521 command_runner.go:130] > kubectl
	I0805 16:37:37.192078    5521 command_runner.go:130] > kubelet
	I0805 16:37:37.192202    5521 binaries.go:44] Found k8s binaries, skipping transfer
	I0805 16:37:37.192247    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0805 16:37:37.200175    5521 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0805 16:37:37.213737    5521 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0805 16:37:37.227101    5521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0805 16:37:37.240845    5521 ssh_runner.go:195] Run: grep 192.169.0.13	control-plane.minikube.internal$ /etc/hosts
	I0805 16:37:37.243830    5521 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0805 16:37:37.253870    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:37:37.350271    5521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:37:37.365726    5521 certs.go:68] Setting up /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000 for IP: 192.169.0.13
	I0805 16:37:37.365744    5521 certs.go:194] generating shared ca certs ...
	I0805 16:37:37.365760    5521 certs.go:226] acquiring lock for ca certs: {Name:mkb83e058d89c7d4e66f4136f377a3c305b13735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:37:37.366000    5521 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key
	I0805 16:37:37.366088    5521 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key
	I0805 16:37:37.366102    5521 certs.go:256] generating profile certs ...
	I0805 16:37:37.366219    5521 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key
	I0805 16:37:37.366302    5521 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key.5b7978ec
	I0805 16:37:37.366434    5521 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key
	I0805 16:37:37.366447    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0805 16:37:37.366477    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0805 16:37:37.366498    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0805 16:37:37.366518    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0805 16:37:37.366537    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0805 16:37:37.366569    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0805 16:37:37.366600    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0805 16:37:37.366630    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0805 16:37:37.366732    5521 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem (1338 bytes)
	W0805 16:37:37.366808    5521 certs.go:480] ignoring /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678_empty.pem, impossibly tiny 0 bytes
	I0805 16:37:37.366821    5521 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem (1679 bytes)
	I0805 16:37:37.366859    5521 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem (1082 bytes)
	I0805 16:37:37.366891    5521 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem (1123 bytes)
	I0805 16:37:37.366923    5521 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem (1675 bytes)
	I0805 16:37:37.366996    5521 certs.go:484] found cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:37:37.367034    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:37:37.367064    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem -> /usr/share/ca-certificates/1678.pem
	I0805 16:37:37.367086    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /usr/share/ca-certificates/16782.pem
	I0805 16:37:37.367546    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0805 16:37:37.395681    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0805 16:37:37.414513    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0805 16:37:37.433690    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0805 16:37:37.452500    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0805 16:37:37.472109    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0805 16:37:37.491753    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0805 16:37:37.511029    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0805 16:37:37.530071    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0805 16:37:37.549206    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/1678.pem --> /usr/share/ca-certificates/1678.pem (1338 bytes)
	I0805 16:37:37.568348    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /usr/share/ca-certificates/16782.pem (1708 bytes)
	I0805 16:37:37.587345    5521 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0805 16:37:37.600856    5521 ssh_runner.go:195] Run: openssl version
	I0805 16:37:37.605037    5521 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0805 16:37:37.605082    5521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0805 16:37:37.614106    5521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:37:37.617312    5521 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:37:37.617414    5521 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug  5 22:48 /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:37:37.617448    5521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0805 16:37:37.621389    5521 command_runner.go:130] > b5213941
	I0805 16:37:37.621569    5521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0805 16:37:37.630682    5521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1678.pem && ln -fs /usr/share/ca-certificates/1678.pem /etc/ssl/certs/1678.pem"
	I0805 16:37:37.639868    5521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1678.pem
	I0805 16:37:37.643124    5521 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:37:37.643203    5521 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug  5 22:58 /usr/share/ca-certificates/1678.pem
	I0805 16:37:37.643234    5521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1678.pem
	I0805 16:37:37.647330    5521 command_runner.go:130] > 51391683
	I0805 16:37:37.647529    5521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1678.pem /etc/ssl/certs/51391683.0"
	I0805 16:37:37.656868    5521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16782.pem && ln -fs /usr/share/ca-certificates/16782.pem /etc/ssl/certs/16782.pem"
	I0805 16:37:37.665981    5521 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16782.pem
	I0805 16:37:37.669370    5521 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:37:37.669486    5521 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug  5 22:58 /usr/share/ca-certificates/16782.pem
	I0805 16:37:37.669522    5521 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16782.pem
	I0805 16:37:37.673595    5521 command_runner.go:130] > 3ec20f2e
	I0805 16:37:37.673823    5521 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16782.pem /etc/ssl/certs/3ec20f2e.0"
	I0805 16:37:37.683082    5521 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:37:37.686344    5521 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0805 16:37:37.686356    5521 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0805 16:37:37.686361    5521 command_runner.go:130] > Device: 253,1	Inode: 3149128     Links: 1
	I0805 16:37:37.686366    5521 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 16:37:37.686371    5521 command_runner.go:130] > Access: 2024-08-05 23:20:58.401066212 +0000
	I0805 16:37:37.686375    5521 command_runner.go:130] > Modify: 2024-08-05 23:20:58.401066212 +0000
	I0805 16:37:37.686399    5521 command_runner.go:130] > Change: 2024-08-05 23:20:58.401066212 +0000
	I0805 16:37:37.686409    5521 command_runner.go:130] >  Birth: 2024-08-05 23:20:58.401066212 +0000
	I0805 16:37:37.686482    5521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0805 16:37:37.690751    5521 command_runner.go:130] > Certificate will not expire
	I0805 16:37:37.690873    5521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0805 16:37:37.695013    5521 command_runner.go:130] > Certificate will not expire
	I0805 16:37:37.695212    5521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0805 16:37:37.700369    5521 command_runner.go:130] > Certificate will not expire
	I0805 16:37:37.700476    5521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0805 16:37:37.704551    5521 command_runner.go:130] > Certificate will not expire
	I0805 16:37:37.704708    5521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0805 16:37:37.708755    5521 command_runner.go:130] > Certificate will not expire
	I0805 16:37:37.708896    5521 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0805 16:37:37.713109    5521 command_runner.go:130] > Certificate will not expire
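
Each openssl x509 -checkend 86400 call above asks whether a certificate expires within the next 24 hours. A self-contained Go equivalent (paths illustrative, taken from the check above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// within duration d, mirroring `openssl x509 -checkend`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println(err)
			return
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}
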
	I0805 16:37:37.713257    5521 kubeadm.go:392] StartCluster: {Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:192.169.0.15 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns
:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:37:37.713368    5521 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:37:37.727282    5521 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0805 16:37:37.735614    5521 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0805 16:37:37.735623    5521 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0805 16:37:37.735628    5521 command_runner.go:130] > /var/lib/minikube/etcd:
	I0805 16:37:37.735631    5521 command_runner.go:130] > member
	I0805 16:37:37.735761    5521 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0805 16:37:37.735771    5521 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0805 16:37:37.735817    5521 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0805 16:37:37.743915    5521 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:37:37.744222    5521 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-985000" does not appear in /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:37:37.744310    5521 kubeconfig.go:62] /Users/jenkins/minikube-integration/19373-1122/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-985000" cluster setting kubeconfig missing "multinode-985000" context setting]
	I0805 16:37:37.744520    5521 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/kubeconfig: {Name:mk2a0d8b4d330b3c26432fc65d015ddf98a9cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:37:37.745178    5521 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:37:37.745371    5521 kapi.go:59] client config for multinode-985000: &rest.Config{Host:"https://192.169.0.13:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/client.key", CAFile:"/Users/jenkins/minikube-integration/19373-1122/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xa6d2060), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0805 16:37:37.745697    5521 cert_rotation.go:137] Starting client certificate rotation controller
	I0805 16:37:37.745867    5521 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0805 16:37:37.753787    5521 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.13
	I0805 16:37:37.753807    5521 kubeadm.go:1160] stopping kube-system containers ...
	I0805 16:37:37.753864    5521 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0805 16:37:37.767689    5521 command_runner.go:130] > c9365aec3389
	I0805 16:37:37.767700    5521 command_runner.go:130] > 3d9fd612d0b1
	I0805 16:37:37.767703    5521 command_runner.go:130] > 2a8cd74365e9
	I0805 16:37:37.767706    5521 command_runner.go:130] > 35b9ac42edc0
	I0805 16:37:37.767710    5521 command_runner.go:130] > 724e5cfab0a2
	I0805 16:37:37.767713    5521 command_runner.go:130] > d58ca48f9f8b
	I0805 16:37:37.767717    5521 command_runner.go:130] > 65a1122097f0
	I0805 16:37:37.767720    5521 command_runner.go:130] > c91338eb0e13
	I0805 16:37:37.767729    5521 command_runner.go:130] > 792feba1a6f6
	I0805 16:37:37.767733    5521 command_runner.go:130] > 1fdd85b796ab
	I0805 16:37:37.767739    5521 command_runner.go:130] > d11865076c64
	I0805 16:37:37.767743    5521 command_runner.go:130] > 608878b33f35
	I0805 16:37:37.767746    5521 command_runner.go:130] > c86e04eb7823
	I0805 16:37:37.767749    5521 command_runner.go:130] > 55a20063845e
	I0805 16:37:37.767753    5521 command_runner.go:130] > b58900db5299
	I0805 16:37:37.767756    5521 command_runner.go:130] > 569788c2699f
	I0805 16:37:37.768462    5521 docker.go:483] Stopping containers: [c9365aec3389 3d9fd612d0b1 2a8cd74365e9 35b9ac42edc0 724e5cfab0a2 d58ca48f9f8b 65a1122097f0 c91338eb0e13 792feba1a6f6 1fdd85b796ab d11865076c64 608878b33f35 c86e04eb7823 55a20063845e b58900db5299 569788c2699f]
	I0805 16:37:37.768536    5521 ssh_runner.go:195] Run: docker stop c9365aec3389 3d9fd612d0b1 2a8cd74365e9 35b9ac42edc0 724e5cfab0a2 d58ca48f9f8b 65a1122097f0 c91338eb0e13 792feba1a6f6 1fdd85b796ab d11865076c64 608878b33f35 c86e04eb7823 55a20063845e b58900db5299 569788c2699f
	I0805 16:37:37.780204    5521 command_runner.go:130] > c9365aec3389
	I0805 16:37:37.781733    5521 command_runner.go:130] > 3d9fd612d0b1
	I0805 16:37:37.781870    5521 command_runner.go:130] > 2a8cd74365e9
	I0805 16:37:37.781981    5521 command_runner.go:130] > 35b9ac42edc0
	I0805 16:37:37.782219    5521 command_runner.go:130] > 724e5cfab0a2
	I0805 16:37:37.782404    5521 command_runner.go:130] > d58ca48f9f8b
	I0805 16:37:37.782493    5521 command_runner.go:130] > 65a1122097f0
	I0805 16:37:37.783962    5521 command_runner.go:130] > c91338eb0e13
	I0805 16:37:37.783968    5521 command_runner.go:130] > 792feba1a6f6
	I0805 16:37:37.783972    5521 command_runner.go:130] > 1fdd85b796ab
	I0805 16:37:37.783977    5521 command_runner.go:130] > d11865076c64
	I0805 16:37:37.784750    5521 command_runner.go:130] > 608878b33f35
	I0805 16:37:37.784758    5521 command_runner.go:130] > c86e04eb7823
	I0805 16:37:37.784761    5521 command_runner.go:130] > 55a20063845e
	I0805 16:37:37.784893    5521 command_runner.go:130] > b58900db5299
	I0805 16:37:37.784898    5521 command_runner.go:130] > 569788c2699f
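
The stop sequence above is a plain list-then-stop over the docker CLI: list kube-system pod containers by name filter, then stop them all in a single command. A sketch of the same pattern via os/exec (illustrative, not minikube's internals):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List kube-system pod containers, mirroring the filter in the log.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			return
		}
		// Stop them in one invocation, as the log does.
		if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
			fmt.Println(err)
		}
	}
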
	I0805 16:37:37.785811    5521 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0805 16:37:37.798972    5521 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0805 16:37:37.807138    5521 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0805 16:37:37.807150    5521 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0805 16:37:37.807156    5521 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0805 16:37:37.807162    5521 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:37:37.807183    5521 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0805 16:37:37.807189    5521 kubeadm.go:157] found existing configuration files:
	
	I0805 16:37:37.807236    5521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0805 16:37:37.815004    5521 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:37:37.815022    5521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0805 16:37:37.815068    5521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0805 16:37:37.823210    5521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0805 16:37:37.831025    5521 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:37:37.831041    5521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0805 16:37:37.831080    5521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0805 16:37:37.839362    5521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0805 16:37:37.847024    5521 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:37:37.847043    5521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0805 16:37:37.847077    5521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0805 16:37:37.855156    5521 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0805 16:37:37.862975    5521 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:37:37.862994    5521 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0805 16:37:37.863026    5521 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0805 16:37:37.871334    5521 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0805 16:37:37.879543    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:37:37.943566    5521 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0805 16:37:37.943663    5521 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0805 16:37:37.943824    5521 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0805 16:37:37.943956    5521 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0805 16:37:37.944158    5521 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0805 16:37:37.944374    5521 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0805 16:37:37.944697    5521 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0805 16:37:37.944812    5521 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0805 16:37:37.945011    5521 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0805 16:37:37.945077    5521 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0805 16:37:37.945285    5521 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0805 16:37:37.946228    5521 command_runner.go:130] > [certs] Using the existing "sa" key
	I0805 16:37:37.946304    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:37:39.167358    5521 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0805 16:37:39.167371    5521 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0805 16:37:39.167376    5521 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0805 16:37:39.167380    5521 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0805 16:37:39.167385    5521 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0805 16:37:39.167390    5521 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0805 16:37:39.167425    5521 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.221104057s)
	I0805 16:37:39.167438    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:37:39.219662    5521 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0805 16:37:39.220354    5521 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0805 16:37:39.220389    5521 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0805 16:37:39.339247    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:37:39.389550    5521 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0805 16:37:39.389565    5521 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0805 16:37:39.391233    5521 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0805 16:37:39.391757    5521 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0805 16:37:39.393094    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:37:39.451609    5521 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
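
Because existing configuration files were found, the restart path re-runs individual kubeadm init phases instead of a full init, and the order matters: certs, then kubeconfigs, then kubelet-start, then the control-plane static pods, then local etcd. A compact sketch of that sequence (assuming kubeadm is on PATH; the log actually prefixes /var/lib/minikube/binaries/v1.30.3):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
				fmt.Printf("%v failed: %v\n%s", p, err, out)
				return
			}
		}
	}
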
	I0805 16:37:39.461516    5521 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:37:39.461580    5521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:37:39.963685    5521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:37:40.462977    5521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:37:40.475006    5521 command_runner.go:130] > 1713
	I0805 16:37:40.475163    5521 api_server.go:72] duration metric: took 1.013654502s to wait for apiserver process to appear ...
	I0805 16:37:40.475173    5521 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:37:40.475189    5521 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:37:42.515953    5521 api_server.go:279] https://192.169.0.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0805 16:37:42.515968    5521 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0805 16:37:42.515976    5521 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:37:42.561960    5521 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 16:37:42.561978    5521 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 16:37:42.975764    5521 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:37:42.980706    5521 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 16:37:42.980725    5521 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 16:37:43.476837    5521 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:37:43.480708    5521 api_server.go:279] https://192.169.0.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0805 16:37:43.480721    5521 api_server.go:103] status: https://192.169.0.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0805 16:37:43.976652    5521 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:37:43.982020    5521 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
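The healthz probes above walk through the apiserver's normal startup sequence: first a 403 for the anonymous user (the `rbac/bootstrap-roles` post-start hook has not yet installed the bootstrap role that lets unauthenticated callers read /healthz), then 500 while individual post-start hooks still report failed, then 200 once everything settles. A minimal sketch of the same poll, assuming the apiserver's self-signed certificate is not in the host trust store (hence skipping verification):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz repeats the check seen in the log: GET /healthz over TLS until
// it returns 200 "ok". Address and ~500ms cadence match the log; the loop
// itself is a sketch, not minikube's api_server.go.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("healthz not ok after %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.169.0.13:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```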
	I0805 16:37:43.982084    5521 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0805 16:37:43.982089    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:43.982096    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:43.982100    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:43.991478    5521 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0805 16:37:43.991491    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:43.991496    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:43.991499    5521 round_trippers.go:580]     Content-Length: 263
	I0805 16:37:43.991501    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:44 GMT
	I0805 16:37:43.991503    5521 round_trippers.go:580]     Audit-Id: c8ad866d-278d-4a88-b577-2337c27f176f
	I0805 16:37:43.991506    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:43.991508    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:43.991511    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:43.991536    5521 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0805 16:37:43.991580    5521 api_server.go:141] control plane version: v1.30.3
	I0805 16:37:43.991595    5521 api_server.go:131] duration metric: took 3.5164126s to wait for apiserver health ...
	I0805 16:37:43.991603    5521 cni.go:84] Creating CNI manager for ""
	I0805 16:37:43.991607    5521 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0805 16:37:44.014799    5521 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0805 16:37:44.035887    5521 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0805 16:37:44.053905    5521 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0805 16:37:44.053923    5521 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0805 16:37:44.053930    5521 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0805 16:37:44.053942    5521 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0805 16:37:44.053946    5521 command_runner.go:130] > Access: 2024-08-05 23:37:30.300677873 +0000
	I0805 16:37:44.053950    5521 command_runner.go:130] > Modify: 2024-07-29 16:10:03.000000000 +0000
	I0805 16:37:44.053955    5521 command_runner.go:130] > Change: 2024-08-05 23:37:28.153646920 +0000
	I0805 16:37:44.053958    5521 command_runner.go:130] >  Birth: -
	I0805 16:37:44.054010    5521 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0805 16:37:44.054018    5521 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0805 16:37:44.078089    5521 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0805 16:37:44.397453    5521 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0805 16:37:44.418847    5521 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0805 16:37:44.539954    5521 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0805 16:37:44.626597    5521 command_runner.go:130] > daemonset.apps/kindnet configured
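With three nodes detected, kindnet is chosen as the CNI, /opt/cni/bin/portmap is stat'ed to confirm the plugin binaries exist, the manifest is copied to /var/tmp/minikube/cni.yaml, and the pinned kubectl applies it. A standalone sketch of that final apply, using the exact paths from the log (they assume the minikube guest layout):

```go
package main

import (
	"fmt"
	"os/exec"
)

// Mirrors the logged CNI apply step: the version-pinned kubectl applies the
// kindnet manifest against the cluster's own kubeconfig.
func main() {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.30.3/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml",
	).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```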
	I0805 16:37:44.629867    5521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 16:37:44.629936    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:37:44.629941    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.629947    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.629953    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.636693    5521 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0805 16:37:44.636713    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.636721    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:44 GMT
	I0805 16:37:44.636727    5521 round_trippers.go:580]     Audit-Id: 06b7f684-2b8a-4634-9922-7ad84cb7e6e5
	I0805 16:37:44.636731    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.636737    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.636741    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.636746    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.638935    5521 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1387"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 73649 chars]
	I0805 16:37:44.641759    5521 system_pods.go:59] 10 kube-system pods found
	I0805 16:37:44.641784    5521 system_pods.go:61] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0805 16:37:44.641790    5521 system_pods.go:61] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0805 16:37:44.641795    5521 system_pods.go:61] "kindnet-5kfjr" [d68d8211-58f0-4a8f-904a-c6f9f530d58d] Running
	I0805 16:37:44.641799    5521 system_pods.go:61] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0805 16:37:44.641804    5521 system_pods.go:61] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0805 16:37:44.641808    5521 system_pods.go:61] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0805 16:37:44.641814    5521 system_pods.go:61] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0805 16:37:44.641818    5521 system_pods.go:61] "kube-proxy-s65dd" [25cd7fe5-8af2-4869-be11-1eb8c5a7ec01] Running
	I0805 16:37:44.641842    5521 system_pods.go:61] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0805 16:37:44.641847    5521 system_pods.go:61] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0805 16:37:44.641852    5521 system_pods.go:74] duration metric: took 11.975799ms to wait for pod list to return data ...
	I0805 16:37:44.641861    5521 node_conditions.go:102] verifying NodePressure condition ...
	I0805 16:37:44.641901    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0805 16:37:44.641906    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.641911    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.641915    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.647494    5521 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0805 16:37:44.647507    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.647513    5521 round_trippers.go:580]     Audit-Id: 51276e8a-8d41-468a-8372-932c99dbe3e8
	I0805 16:37:44.647516    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.647518    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.647539    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.647544    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.647547    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:44 GMT
	I0805 16:37:44.647674    5521 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1388"},"items":[{"metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10158 chars]
	I0805 16:37:44.648158    5521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:37:44.648172    5521 node_conditions.go:123] node cpu capacity is 2
	I0805 16:37:44.648182    5521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:37:44.648186    5521 node_conditions.go:123] node cpu capacity is 2
	I0805 16:37:44.648190    5521 node_conditions.go:105] duration metric: took 6.325811ms to run NodePressure ...
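The NodePressure step lists the nodes and reads the same capacity fields echoed above (ephemeral storage and CPU). A sketch of an equivalent check with client-go rather than minikube's internal round-tripper; the kubeconfig path is an assumption:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Lists nodes and prints the fields the log reports per node. This is an
// illustration of the check, not minikube's node_conditions.go.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: ephemeral-storage=%s cpu=%s\n",
			n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String(),
		)
	}
}
```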
	I0805 16:37:44.648205    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0805 16:37:44.761435    5521 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0805 16:37:44.914201    5521 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0805 16:37:44.915254    5521 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0805 16:37:44.915318    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane

	I0805 16:37:44.915324    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.915331    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.915334    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.917615    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:44.917630    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.917640    5521 round_trippers.go:580]     Audit-Id: 84aaee6c-4475-49f2-8185-30cc2c755e1c
	I0805 16:37:44.917647    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.917651    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.917654    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.917657    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.917660    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.918012    5521 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1392"},"items":[{"metadata":{"name":"etcd-multinode-985000","namespace":"kube-system","uid":"8d7ca2d9-8c7b-41b9-a199-de6449107471","resourceVersion":"1380","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.mirror":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030299Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 30917 chars]
	I0805 16:37:44.918731    5521 kubeadm.go:739] kubelet initialised
	I0805 16:37:44.918740    5521 kubeadm.go:740] duration metric: took 3.47538ms waiting for restarted kubelet to initialise ...
	I0805 16:37:44.918747    5521 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:37:44.918798    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:37:44.918804    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.918810    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.918815    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.920859    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:44.920866    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.920871    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.920873    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.920876    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.920878    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.920880    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.920883    5521 round_trippers.go:580]     Audit-Id: 51e54f33-9547-4470-b9ba-c080f1387d56
	I0805 16:37:44.921402    5521 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1392"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 73056 chars]
	I0805 16:37:44.922957    5521 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:44.922999    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:44.923004    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.923008    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.923011    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.924336    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:44.924346    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.924352    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.924355    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.924361    5521 round_trippers.go:580]     Audit-Id: e46b48bf-5949-4a1a-88ca-0532f6b9c8c3
	I0805 16:37:44.924364    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.924366    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.924368    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.924440    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0805 16:37:44.924683    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:44.924690    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.924696    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.924702    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.925980    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:44.925990    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.925998    5521 round_trippers.go:580]     Audit-Id: 28537896-265f-4611-9cfa-95ab32a9f5dc
	I0805 16:37:44.926004    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.926014    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.926018    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.926020    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.926023    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.926150    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:44.926329    5521 pod_ready.go:97] node "multinode-985000" hosting pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:44.926339    5521 pod_ready.go:81] duration metric: took 3.373593ms for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
	E0805 16:37:44.926345    5521 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-985000" hosting pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
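The pod_ready loop gates on the hosting node: when the node reports Ready=False, the wait for that pod is skipped with the `pod_ready.go:97` message above rather than burning the 4m0s budget. A client-go sketch of the node-Ready check itself (an illustration, not minikube's pod_ready.go):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's Ready condition is True, the same
// gate visible in the skipped waits above.
func nodeReady(client *kubernetes.Clientset, name string) (bool, error) {
	node, err := client.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ready, err := nodeReady(client, "multinode-985000")
	if err != nil {
		panic(err)
	}
	fmt.Println("node Ready:", ready)
}
```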
	I0805 16:37:44.926352    5521 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:44.926380    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-985000
	I0805 16:37:44.926385    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.926390    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.926394    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.927346    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:37:44.927354    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.927359    5521 round_trippers.go:580]     Audit-Id: 156a7215-933a-4e99-a1ed-5cbaef6005e2
	I0805 16:37:44.927362    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.927366    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.927371    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.927376    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.927381    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.927503    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-985000","namespace":"kube-system","uid":"8d7ca2d9-8c7b-41b9-a199-de6449107471","resourceVersion":"1380","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.mirror":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030299Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0805 16:37:44.927709    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:44.927716    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.927722    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.927726    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.928738    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:44.928746    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.928753    5521 round_trippers.go:580]     Audit-Id: d454e0d3-91a1-437f-9641-9eb40301fb8f
	I0805 16:37:44.928758    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.928762    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.928767    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.928790    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.928796    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.928901    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:44.929068    5521 pod_ready.go:97] node "multinode-985000" hosting pod "etcd-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:44.929083    5521 pod_ready.go:81] duration metric: took 2.726167ms for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	E0805 16:37:44.929089    5521 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-985000" hosting pod "etcd-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:44.929115    5521 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:44.929157    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-985000
	I0805 16:37:44.929163    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.929168    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.929172    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.930121    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:37:44.930130    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.930134    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.930137    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.930139    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.930142    5521 round_trippers.go:580]     Audit-Id: 04a0388e-012b-4775-93ee-012b587c4ce5
	I0805 16:37:44.930153    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.930157    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.930304    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-985000","namespace":"kube-system","uid":"9be3378a-5fab-4907-baad-507918e714e4","resourceVersion":"1377","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.mirror":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8136 chars]
	I0805 16:37:44.930549    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:44.930558    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.930562    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.930567    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.931628    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:44.931636    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.931641    5521 round_trippers.go:580]     Audit-Id: 72e9cf52-6af7-45fd-a39e-e10ac17a459d
	I0805 16:37:44.931646    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.931652    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.931657    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.931660    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.931663    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.931772    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:44.931949    5521 pod_ready.go:97] node "multinode-985000" hosting pod "kube-apiserver-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:44.931958    5521 pod_ready.go:81] duration metric: took 2.833903ms for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	E0805 16:37:44.931964    5521 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-985000" hosting pod "kube-apiserver-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:44.931970    5521 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:44.931996    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-985000
	I0805 16:37:44.932000    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:44.932006    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:44.932009    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:44.933363    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:44.933370    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:44.933375    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:44.933379    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:44.933383    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:44.933389    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:44.933392    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:44.933395    5521 round_trippers.go:580]     Audit-Id: 993e7085-2a06-4126-8cc5-0d75a41d047f
	I0805 16:37:44.933659    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-985000","namespace":"kube-system","uid":"4ad64361-65de-4b0b-b2a3-07df18c2e603","resourceVersion":"1378","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.mirror":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.seen":"2024-08-05T23:21:06.366027130Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7727 chars]
	I0805 16:37:45.030087    5521 request.go:629] Waited for 96.18446ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:45.030215    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:45.030223    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:45.030234    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:45.030255    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:45.032395    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:45.032407    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:45.032414    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:45.032418    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:45.032423    5521 round_trippers.go:580]     Audit-Id: fd76f05c-aa0d-49d6-bc15-f6320e076edc
	I0805 16:37:45.032426    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:45.032428    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:45.032432    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:45.032710    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:45.032917    5521 pod_ready.go:97] node "multinode-985000" hosting pod "kube-controller-manager-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:45.032927    5521 pod_ready.go:81] duration metric: took 100.952173ms for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	E0805 16:37:45.032933    5521 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-985000" hosting pod "kube-controller-manager-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
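The "Waited ... due to client-side throttling, not priority and fairness" lines come from client-go's local rate limiter (the default is about 5 QPS with a burst of 10), not from the server. A sketch showing where those knobs live on a rest.Config; the raised values are purely illustrative:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	// client-go queues requests once its client-side limiter is exhausted,
	// producing the "Waited ... due to client-side throttling" log lines above.
	cfg.QPS = 50
	cfg.Burst = 100
	fmt.Printf("rest config: QPS=%v Burst=%v\n", cfg.QPS, cfg.Burst)
}
```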
	I0805 16:37:45.032940    5521 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:45.231074    5521 request.go:629] Waited for 198.067218ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwgw7
	I0805 16:37:45.231166    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwgw7
	I0805 16:37:45.231251    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:45.231259    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:45.231265    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:45.233956    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:45.233970    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:45.233977    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:45.234001    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:45.234024    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:45.234036    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:45.234040    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:45.234045    5521 round_trippers.go:580]     Audit-Id: a628a40a-acc3-4a40-8f85-01be7202c746
	I0805 16:37:45.234163    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fwgw7","generateName":"kube-proxy-","namespace":"kube-system","uid":"3fb72e39-699d-4123-ae5e-e314a191d904","resourceVersion":"1388","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b6258e6-7b31-4600-b32b-4a269867c123","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b6258e6-7b31-4600-b32b-4a269867c123\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0805 16:37:45.430145    5521 request.go:629] Waited for 195.640146ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:45.430221    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:45.430232    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:45.430243    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:45.430253    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:45.432534    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:45.432543    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:45.432549    5521 round_trippers.go:580]     Audit-Id: b3c72e32-7485-434a-9741-e61d4dbf854b
	I0805 16:37:45.432551    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:45.432554    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:45.432557    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:45.432560    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:45.432563    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:45.432975    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:45.433185    5521 pod_ready.go:97] node "multinode-985000" hosting pod "kube-proxy-fwgw7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:45.433197    5521 pod_ready.go:81] duration metric: took 400.252263ms for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	E0805 16:37:45.433203    5521 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-985000" hosting pod "kube-proxy-fwgw7" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:45.433211    5521 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s65dd" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:45.632072    5521 request.go:629] Waited for 198.802376ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s65dd
	I0805 16:37:45.632244    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s65dd
	I0805 16:37:45.632255    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:45.632266    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:45.632272    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:45.635053    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:45.635075    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:45.635085    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:45.635094    5521 round_trippers.go:580]     Audit-Id: 57426407-9d2e-4f47-a704-559027932b6b
	I0805 16:37:45.635098    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:45.635145    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:45.635163    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:45.635171    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:45.635354    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s65dd","generateName":"kube-proxy-","namespace":"kube-system","uid":"25cd7fe5-8af2-4869-be11-1eb8c5a7ec01","resourceVersion":"1280","creationTimestamp":"2024-08-05T23:34:49Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b6258e6-7b31-4600-b32b-4a269867c123","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:34:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b6258e6-7b31-4600-b32b-4a269867c123\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0805 16:37:45.831233    5521 request.go:629] Waited for 195.519063ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000-m03
	I0805 16:37:45.831411    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000-m03
	I0805 16:37:45.831422    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:45.831433    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:45.831439    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:45.834136    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:45.834155    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:45.834163    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:45 GMT
	I0805 16:37:45.834183    5521 round_trippers.go:580]     Audit-Id: 27e71a24-1a24-4f27-b263-1184e4e136ef
	I0805 16:37:45.834194    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:45.834220    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:45.834227    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:45.834231    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:45.834346    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000-m03","uid":"9699bc94-d62c-4219-9310-93c890f4d182","resourceVersion":"1310","creationTimestamp":"2024-08-05T23:35:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_05T16_35_55_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:35:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0805 16:37:45.834594    5521 pod_ready.go:92] pod "kube-proxy-s65dd" in "kube-system" namespace has status "Ready":"True"
	I0805 16:37:45.834607    5521 pod_ready.go:81] duration metric: took 401.389356ms for pod "kube-proxy-s65dd" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:45.834615    5521 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:46.030012    5521 request.go:629] Waited for 195.347838ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:37:46.030118    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:37:46.030282    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:46.030295    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:46.030302    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:46.033255    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:46.033269    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:46.033277    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:46.033282    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:46 GMT
	I0805 16:37:46.033295    5521 round_trippers.go:580]     Audit-Id: 5581d0b0-634a-4879-93db-f12183f9c6d1
	I0805 16:37:46.033299    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:46.033303    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:46.033307    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:46.033383    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-985000","namespace":"kube-system","uid":"5e23b1b7-e45d-4b43-831c-aa835c5e536d","resourceVersion":"1379","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.mirror":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.seen":"2024-08-05T23:21:06.366029633Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5439 chars]
	I0805 16:37:46.231588    5521 request.go:629] Waited for 197.896286ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:46.231711    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:46.231722    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:46.231734    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:46.231741    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:46.234296    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:46.234309    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:46.234327    5521 round_trippers.go:580]     Audit-Id: ed41d168-df4f-4577-a59b-11a4695f1e4d
	I0805 16:37:46.234334    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:46.234344    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:46.234348    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:46.234352    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:46.234357    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:46 GMT
	I0805 16:37:46.234726    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:46.235010    5521 pod_ready.go:97] node "multinode-985000" hosting pod "kube-scheduler-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:46.235036    5521 pod_ready.go:81] duration metric: took 400.401386ms for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	E0805 16:37:46.235046    5521 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-985000" hosting pod "kube-scheduler-multinode-985000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-985000" has status "Ready":"False"
	I0805 16:37:46.235053    5521 pod_ready.go:38] duration metric: took 1.316290856s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:37:46.235072    5521 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0805 16:37:46.244782    5521 command_runner.go:130] > -16
	I0805 16:37:46.244799    5521 ops.go:34] apiserver oom_adj: -16
	I0805 16:37:46.244803    5521 kubeadm.go:597] duration metric: took 8.509016692s to restartPrimaryControlPlane
	I0805 16:37:46.244808    5521 kubeadm.go:394] duration metric: took 8.531546295s to StartCluster
	I0805 16:37:46.244817    5521 settings.go:142] acquiring lock: {Name:mk564a817a54ecf2aef16a4d2309e85208c0231f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:37:46.244907    5521 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:37:46.245297    5521 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/kubeconfig: {Name:mk2a0d8b4d330b3c26432fc65d015ddf98a9cc93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 16:37:46.245581    5521 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0805 16:37:46.245620    5521 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0805 16:37:46.245737    5521 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:37:46.265883    5521 out.go:177] * Verifying Kubernetes components...
	I0805 16:37:46.287681    5521 out.go:177] * Enabled addons: 
	I0805 16:37:46.308720    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:37:46.329784    5521 addons.go:510] duration metric: took 84.170663ms for enable addons: enabled=[]
	I0805 16:37:46.445431    5521 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0805 16:37:46.455908    5521 node_ready.go:35] waiting up to 6m0s for node "multinode-985000" to be "Ready" ...
	I0805 16:37:46.455963    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:46.455968    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:46.455974    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:46.455977    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:46.457387    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:46.457397    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:46.457405    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:46.457409    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:46 GMT
	I0805 16:37:46.457413    5521 round_trippers.go:580]     Audit-Id: bd4eda68-4863-49e7-bbfb-7ea21cb5ada5
	I0805 16:37:46.457415    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:46.457419    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:46.457421    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:46.457522    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:46.956358    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:46.956384    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:46.956396    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:46.956402    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:46.958818    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:46.958832    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:46.958842    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:46.958847    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:47 GMT
	I0805 16:37:46.958852    5521 round_trippers.go:580]     Audit-Id: b4463266-7add-4cc7-bedc-006651384d80
	I0805 16:37:46.958856    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:46.958860    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:46.958865    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:46.959158    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:47.456173    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:47.456189    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:47.456196    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:47.456199    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:47.457836    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:47.457847    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:47.457853    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:47 GMT
	I0805 16:37:47.457855    5521 round_trippers.go:580]     Audit-Id: b5690d8d-ba4d-4e8f-b3e4-326d910d1169
	I0805 16:37:47.457859    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:47.457863    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:47.457865    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:47.457868    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:47.458059    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:47.957596    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:47.957622    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:47.957635    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:47.957747    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:47.960401    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:47.960416    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:47.960423    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:48 GMT
	I0805 16:37:47.960427    5521 round_trippers.go:580]     Audit-Id: 02db3cf8-0261-4eb0-999f-e3bddfad9106
	I0805 16:37:47.960432    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:47.960436    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:47.960442    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:47.960446    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:47.960593    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:48.456064    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:48.456080    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:48.456087    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:48.456090    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:48.457742    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:48.457753    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:48.457758    5521 round_trippers.go:580]     Audit-Id: 70dbc308-f0bd-455d-8c1c-5afbe89a93d9
	I0805 16:37:48.457762    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:48.457764    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:48.457768    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:48.457772    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:48.457775    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:48 GMT
	I0805 16:37:48.457993    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:48.458188    5521 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:37:48.956783    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:48.956808    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:48.956843    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:48.956864    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:48.959167    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:48.959183    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:48.959193    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:48.959202    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:48.959208    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:48.959213    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:49 GMT
	I0805 16:37:48.959218    5521 round_trippers.go:580]     Audit-Id: 8fc7039f-2874-4170-a425-4689f2a4108b
	I0805 16:37:48.959223    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:48.959444    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:49.456474    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:49.456499    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:49.456511    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:49.456519    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:49.460713    5521 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0805 16:37:49.460739    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:49.460750    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:49.460761    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:49.460768    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:49 GMT
	I0805 16:37:49.460771    5521 round_trippers.go:580]     Audit-Id: ca04ca0c-3f72-4aff-8e7b-301f719bcbfc
	I0805 16:37:49.460775    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:49.460779    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:49.460857    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:49.957699    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:49.957728    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:49.957740    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:49.957835    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:49.960680    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:49.960698    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:49.960708    5521 round_trippers.go:580]     Audit-Id: 2de612c8-6d27-4ce3-b54a-c8ff3a4a639d
	I0805 16:37:49.960714    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:49.960722    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:49.960727    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:49.960734    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:49.960740    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:50 GMT
	I0805 16:37:49.960897    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:50.457100    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:50.457129    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:50.457142    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:50.457153    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:50.459627    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:50.459642    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:50.459649    5521 round_trippers.go:580]     Audit-Id: fafeb1d7-a055-47c0-988a-6b38c5651dfc
	I0805 16:37:50.459655    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:50.459660    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:50.459663    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:50.459666    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:50.459676    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:50 GMT
	I0805 16:37:50.459741    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:50.459999    5521 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:37:50.956078    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:50.956154    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:50.956163    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:50.956169    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:50.958070    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:50.958082    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:50.958087    5521 round_trippers.go:580]     Audit-Id: 87aa82fe-18d5-4cce-85d4-59e61ce26f17
	I0805 16:37:50.958091    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:50.958094    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:50.958097    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:50.958100    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:50.958102    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:51 GMT
	I0805 16:37:50.958160    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:51.457531    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:51.457557    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:51.457653    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:51.457663    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:51.460369    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:51.460384    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:51.460391    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:51.460396    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:51.460400    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:51.460404    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:51 GMT
	I0805 16:37:51.460431    5521 round_trippers.go:580]     Audit-Id: 9466c051-32fc-4ea5-bd73-ed0e7f687b57
	I0805 16:37:51.460450    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:51.460881    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:51.958224    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:51.958246    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:51.958258    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:51.958263    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:51.960788    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:51.960803    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:51.960811    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:51.960816    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:52 GMT
	I0805 16:37:51.960821    5521 round_trippers.go:580]     Audit-Id: af328a60-8cdc-4dd9-8f48-0c8f8247a6e1
	I0805 16:37:51.960827    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:51.960833    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:51.960836    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:51.960936    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:52.457362    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:52.457389    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:52.457401    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:52.457409    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:52.460067    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:52.460081    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:52.460088    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:52.460093    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:52.460097    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:52.460101    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:52.460104    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:52 GMT
	I0805 16:37:52.460107    5521 round_trippers.go:580]     Audit-Id: 7e825e88-a0c3-4ec8-9784-79cc2ced397e
	I0805 16:37:52.460238    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:52.460481    5521 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:37:52.956862    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:52.956888    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:52.956900    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:52.956906    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:52.959190    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:52.959207    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:52.959222    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:52.959230    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:52.959236    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:52.959241    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:53 GMT
	I0805 16:37:52.959245    5521 round_trippers.go:580]     Audit-Id: 1a9c796b-7598-4e9f-984e-7d71ef0ecc6b
	I0805 16:37:52.959248    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:52.959484    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:53.456240    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:53.456260    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:53.456268    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:53.456272    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:53.458257    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:53.458266    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:53.458272    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:53.458274    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:53 GMT
	I0805 16:37:53.458279    5521 round_trippers.go:580]     Audit-Id: 624a2604-a974-4849-aae7-2e1a5658d567
	I0805 16:37:53.458282    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:53.458287    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:53.458289    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:53.458511    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:53.957417    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:53.957442    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:53.957454    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:53.957460    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:53.960056    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:53.960069    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:53.960076    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:53.960080    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:53.960084    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:53.960088    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:53.960092    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:54 GMT
	I0805 16:37:53.960096    5521 round_trippers.go:580]     Audit-Id: 4faec3b3-a538-4ac5-b5df-a77a30b26579
	I0805 16:37:53.960283    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:54.456804    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:54.456830    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:54.456842    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:54.456850    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:54.459440    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:54.459455    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:54.459462    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:54.459467    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:54 GMT
	I0805 16:37:54.459471    5521 round_trippers.go:580]     Audit-Id: c4315559-7c37-420d-be82-f17839e46d45
	I0805 16:37:54.459475    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:54.459478    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:54.459483    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:54.459541    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1375","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0805 16:37:54.957878    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:54.957940    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:54.957948    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:54.957954    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:54.959305    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:54.959315    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:54.959320    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:54.959323    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:55 GMT
	I0805 16:37:54.959326    5521 round_trippers.go:580]     Audit-Id: b65ad43a-738a-45c5-8d88-879d1015f894
	I0805 16:37:54.959328    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:54.959331    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:54.959334    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:54.959389    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1479","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5422 chars]
	I0805 16:37:54.959586    5521 node_ready.go:53] node "multinode-985000" has status "Ready":"False"
	I0805 16:37:55.456090    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:55.456116    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:55.456128    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:55.456169    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:55.458752    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:55.458766    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:55.458773    5521 round_trippers.go:580]     Audit-Id: 616d546e-47b3-4c39-a1cf-a7bc7ca58bf7
	I0805 16:37:55.458777    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:55.458782    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:55.458785    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:55.458790    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:55.458793    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:55 GMT
	I0805 16:37:55.459013    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1493","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0805 16:37:55.956768    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:55.956795    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:55.956807    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:55.956815    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:55.959573    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:55.959589    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:55.959598    5521 round_trippers.go:580]     Audit-Id: a21b3b8d-1df5-4728-80b8-f92ed173fb09
	I0805 16:37:55.959602    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:55.959606    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:55.959611    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:55.959615    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:55.959619    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:56 GMT
	I0805 16:37:55.959715    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1493","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0805 16:37:56.456636    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:56.456739    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:56.456753    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:56.456759    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:56.458839    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:56.458851    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:56.458859    5521 round_trippers.go:580]     Audit-Id: b8671d44-80ca-458b-b1a7-50f5ad978f8f
	I0805 16:37:56.458864    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:56.458870    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:56.458874    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:56.458878    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:56.458881    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:56 GMT
	I0805 16:37:56.458982    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1493","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0805 16:37:56.956321    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:56.956347    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:56.956363    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:56.956372    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:56.958919    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:56.958932    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:56.958939    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:56.958944    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:56.958948    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:57 GMT
	I0805 16:37:56.958952    5521 round_trippers.go:580]     Audit-Id: 4f4bb43a-a081-437b-8ed2-cbdb66346756
	I0805 16:37:56.958958    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:56.958961    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:56.959161    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1493","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0805 16:37:57.456800    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:57.456815    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:57.456821    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:57.456825    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:57.458252    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:57.458262    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:57.458266    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:57.458270    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:57 GMT
	I0805 16:37:57.458273    5521 round_trippers.go:580]     Audit-Id: f407e253-302d-4f95-b5a4-ba92b556127a
	I0805 16:37:57.458276    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:57.458278    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:57.458281    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:57.458508    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:57.458703    5521 node_ready.go:49] node "multinode-985000" has status "Ready":"True"
	I0805 16:37:57.458716    5521 node_ready.go:38] duration metric: took 11.002775889s for node "multinode-985000" to be "Ready" ...
	I0805 16:37:57.458723    5521 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0805 16:37:57.458755    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:37:57.458761    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:57.458766    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:57.458770    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:57.462079    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:37:57.462091    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:57.462096    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:57.462099    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:57.462102    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:57.462105    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:57.462107    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:57 GMT
	I0805 16:37:57.462111    5521 round_trippers.go:580]     Audit-Id: c20c94e3-f664-43bb-99a2-b2fb3d7f9976
	I0805 16:37:57.463098    5521 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1502"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 72982 chars]
	I0805 16:37:57.464719    5521 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:57.464766    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:57.464771    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:57.464777    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:57.464781    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:57.468609    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:37:57.468622    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:57.468660    5521 round_trippers.go:580]     Audit-Id: 9de6faa5-7a31-44a9-83bf-9ebccfd4a34c
	I0805 16:37:57.468668    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:57.468673    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:57.468677    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:57.468680    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:57.468683    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:57 GMT
	I0805 16:37:57.468940    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0805 16:37:57.469229    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:57.469236    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:57.469242    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:57.469246    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:57.472498    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:37:57.472509    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:57.472515    5521 round_trippers.go:580]     Audit-Id: 4ff61667-289e-4440-93e2-be7d6d55b721
	I0805 16:37:57.472519    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:57.472522    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:57.472525    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:57.472529    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:57.472531    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:57 GMT
	I0805 16:37:57.472719    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:57.966220    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:57.966278    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:57.966296    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:57.966304    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:57.969173    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:57.969187    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:57.969194    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:57.969198    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:57.969202    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:57.969206    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:57.969210    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:58 GMT
	I0805 16:37:57.969214    5521 round_trippers.go:580]     Audit-Id: 9d8c78fc-82fd-4791-b979-ae013d775a53
	I0805 16:37:57.969286    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0805 16:37:57.969645    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:57.969655    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:57.969662    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:57.969665    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:57.971024    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:57.971035    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:57.971043    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:57.971057    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:57.971067    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:57.971072    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:58 GMT
	I0805 16:37:57.971078    5521 round_trippers.go:580]     Audit-Id: 1384bca3-9b68-4402-b310-399209a4314b
	I0805 16:37:57.971085    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:57.971227    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:58.465939    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:58.465967    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:58.465978    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:58.465984    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:58.468758    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:58.468774    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:58.468781    5521 round_trippers.go:580]     Audit-Id: 72df3ada-da8b-4478-8394-8e4440f54d0d
	I0805 16:37:58.468786    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:58.468790    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:58.468794    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:58.468797    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:58.468800    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:58 GMT
	I0805 16:37:58.469261    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0805 16:37:58.469660    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:58.469669    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:58.469678    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:58.469683    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:58.471092    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:58.471100    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:58.471106    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:58.471110    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:58.471113    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:58 GMT
	I0805 16:37:58.471116    5521 round_trippers.go:580]     Audit-Id: 422803bf-9df2-457f-baab-402da408f3ef
	I0805 16:37:58.471118    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:58.471121    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:58.471275    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:58.966614    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:58.966630    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:58.966638    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:58.966643    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:58.968744    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:58.968756    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:58.968764    5521 round_trippers.go:580]     Audit-Id: 3e47d6ce-e3a9-4db9-9176-cf25942d89b9
	I0805 16:37:58.968769    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:58.968773    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:58.968777    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:58.968779    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:58.968782    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:59 GMT
	I0805 16:37:58.969124    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0805 16:37:58.969515    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:58.969537    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:58.969561    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:58.969565    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:58.970905    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:58.970913    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:58.970918    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:58.970927    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:58.970932    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:58.970935    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:58.970938    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:59 GMT
	I0805 16:37:58.970940    5521 round_trippers.go:580]     Audit-Id: f5155c70-9046-4427-944c-248d4543ab46
	I0805 16:37:58.971032    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.465508    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:59.465521    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.465527    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.465530    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.468891    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:37:59.468903    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.468908    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:59 GMT
	I0805 16:37:59.468912    5521 round_trippers.go:580]     Audit-Id: 04ed6578-9810-4fac-bbc6-2e95106ea7a2
	I0805 16:37:59.468914    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.468917    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.468920    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.468922    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.469308    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1383","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0805 16:37:59.469589    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:59.469595    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.469601    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.469604    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.471279    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:59.471287    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.471293    5521 round_trippers.go:580]     Audit-Id: 9ef82004-a4d2-4da7-8c13-f62c040183d9
	I0805 16:37:59.471296    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.471299    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.471301    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.471303    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.471306    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:37:59 GMT
	I0805 16:37:59.471417    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.471592    5521 pod_ready.go:102] pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace has status "Ready":"False"
	I0805 16:37:59.965187    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fqtll
	I0805 16:37:59.965206    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.965218    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.965223    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.967501    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:37:59.967516    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.967523    5521 round_trippers.go:580]     Audit-Id: 6aa85007-6ee0-4657-8e54-a4bb9dfb34ac
	I0805 16:37:59.967528    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.967548    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.967555    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.967559    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.967563    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.967804    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1520","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6784 chars]
	I0805 16:37:59.968187    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:59.968194    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.968200    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.968203    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.969359    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:59.969366    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.969373    5521 round_trippers.go:580]     Audit-Id: 47ab49d3-f2d9-42b4-9106-89187d49ce44
	I0805 16:37:59.969376    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.969378    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.969382    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.969385    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.969389    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.969574    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.969740    5521 pod_ready.go:92] pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace has status "Ready":"True"
	I0805 16:37:59.969749    5521 pod_ready.go:81] duration metric: took 2.505012595s for pod "coredns-7db6d8ff4d-fqtll" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.969756    5521 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.969784    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-985000
	I0805 16:37:59.969788    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.969793    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.969797    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.970714    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:37:59.970723    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.970728    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.970731    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.970733    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.970736    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.970738    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.970740    5521 round_trippers.go:580]     Audit-Id: e43ae6e7-5ed0-48b6-a0a7-dfb77e057ed0
	I0805 16:37:59.970919    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-985000","namespace":"kube-system","uid":"8d7ca2d9-8c7b-41b9-a199-de6449107471","resourceVersion":"1506","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.13:2379","kubernetes.io/config.hash":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.mirror":"130f1fd2ee4ff0ecb65e58239795d0b6","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030299Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6358 chars]
	I0805 16:37:59.971134    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:59.971141    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.971147    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.971150    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.972128    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:37:59.972141    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.972148    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.972154    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.972158    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.972160    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.972163    5521 round_trippers.go:580]     Audit-Id: 5b17c3dc-a0a2-4c0d-aa7a-8999b87e3e64
	I0805 16:37:59.972187    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.972281    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.972443    5521 pod_ready.go:92] pod "etcd-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:37:59.972450    5521 pod_ready.go:81] duration metric: took 2.690084ms for pod "etcd-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.972459    5521 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.972487    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-985000
	I0805 16:37:59.972492    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.972497    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.972500    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.973486    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:37:59.973494    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.973499    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.973504    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.973508    5521 round_trippers.go:580]     Audit-Id: 5bcb7226-eda8-4823-8b5c-25d9a2496fe7
	I0805 16:37:59.973514    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.973518    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.973522    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.973687    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-985000","namespace":"kube-system","uid":"9be3378a-5fab-4907-baad-507918e714e4","resourceVersion":"1498","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.13:8443","kubernetes.io/config.hash":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.mirror":"5908531d711118eab279d6b15448dc42","kubernetes.io/config.seen":"2024-08-05T23:21:06.366030949Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7892 chars]
	I0805 16:37:59.973925    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:59.973931    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.973937    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.973941    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.974960    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:59.974978    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.974986    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.974990    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.974993    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.974996    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.975000    5521 round_trippers.go:580]     Audit-Id: 9e7c3601-1b94-462b-97ec-1a8afab1df7f
	I0805 16:37:59.975003    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.975129    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.975296    5521 pod_ready.go:92] pod "kube-apiserver-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:37:59.975303    5521 pod_ready.go:81] duration metric: took 2.839851ms for pod "kube-apiserver-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.975309    5521 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.975339    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-985000
	I0805 16:37:59.975343    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.975349    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.975352    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.976422    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:59.976452    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.976458    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.976467    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.976470    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.976472    5521 round_trippers.go:580]     Audit-Id: 512682ae-f4a9-4641-903b-89cfe7630d58
	I0805 16:37:59.976476    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.976478    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.976584    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-985000","namespace":"kube-system","uid":"4ad64361-65de-4b0b-b2a3-07df18c2e603","resourceVersion":"1494","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.mirror":"8e41fb21b40cd2f3bd83b000891f6569","kubernetes.io/config.seen":"2024-08-05T23:21:06.366027130Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7465 chars]
	I0805 16:37:59.976808    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:59.976815    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.976820    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.976824    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.977900    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:59.977908    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.977912    5521 round_trippers.go:580]     Audit-Id: 09ba5c21-e357-4918-93b4-ff1a00ece334
	I0805 16:37:59.977916    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.977919    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.977922    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.977925    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.977928    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.978095    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.978252    5521 pod_ready.go:92] pod "kube-controller-manager-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:37:59.978260    5521 pod_ready.go:81] duration metric: took 2.945375ms for pod "kube-controller-manager-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.978267    5521 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.978292    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fwgw7
	I0805 16:37:59.978297    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.978313    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.978320    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.979354    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:37:59.979360    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.979364    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.979367    5521 round_trippers.go:580]     Audit-Id: d6e77621-e9d2-486b-8cc4-49ab45a5f053
	I0805 16:37:59.979373    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.979378    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.979382    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.979386    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.979584    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-fwgw7","generateName":"kube-proxy-","namespace":"kube-system","uid":"3fb72e39-699d-4123-ae5e-e314a191d904","resourceVersion":"1509","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b6258e6-7b31-4600-b32b-4a269867c123","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b6258e6-7b31-4600-b32b-4a269867c123\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0805 16:37:59.979798    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:37:59.979805    5521 round_trippers.go:469] Request Headers:
	I0805 16:37:59.979810    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:37:59.979815    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:37:59.980814    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:37:59.980822    5521 round_trippers.go:577] Response Headers:
	I0805 16:37:59.980829    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:37:59.980835    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:37:59.980839    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:37:59.980842    5521 round_trippers.go:580]     Audit-Id: bf9dc5db-49ef-4e93-a9ad-d8ea6d952b22
	I0805 16:37:59.980845    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:37:59.980847    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:37:59.980963    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1500","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0805 16:37:59.981119    5521 pod_ready.go:92] pod "kube-proxy-fwgw7" in "kube-system" namespace has status "Ready":"True"
	I0805 16:37:59.981126    5521 pod_ready.go:81] duration metric: took 2.853579ms for pod "kube-proxy-fwgw7" in "kube-system" namespace to be "Ready" ...
	I0805 16:37:59.981131    5521 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s65dd" in "kube-system" namespace to be "Ready" ...
	I0805 16:38:00.165697    5521 request.go:629] Waited for 184.4763ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s65dd
	I0805 16:38:00.165754    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s65dd
	I0805 16:38:00.165763    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:00.165776    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:00.165784    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:00.168520    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:38:00.168535    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:00.168543    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:00.168547    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:00.168552    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:00.168556    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:00.168559    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:38:00.168564    5521 round_trippers.go:580]     Audit-Id: cb996198-c69f-41f3-9883-c0b1d86c0ef8
	I0805 16:38:00.168681    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s65dd","generateName":"kube-proxy-","namespace":"kube-system","uid":"25cd7fe5-8af2-4869-be11-1eb8c5a7ec01","resourceVersion":"1280","creationTimestamp":"2024-08-05T23:34:49Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"8b6258e6-7b31-4600-b32b-4a269867c123","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:34:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8b6258e6-7b31-4600-b32b-4a269867c123\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0805 16:38:00.366684    5521 request.go:629] Waited for 197.656042ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000-m03
	I0805 16:38:00.366816    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000-m03
	I0805 16:38:00.366827    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:00.366839    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:00.366845    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:00.369434    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:38:00.369449    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:00.369456    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:38:00.369461    5521 round_trippers.go:580]     Audit-Id: 8a485a3a-116c-4fd2-986e-0f95c466f2b6
	I0805 16:38:00.369464    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:00.369468    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:00.369472    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:00.369491    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:00.369671    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000-m03","uid":"9699bc94-d62c-4219-9310-93c890f4d182","resourceVersion":"1310","creationTimestamp":"2024-08-05T23:35:55Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_08_05T16_35_55_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:35:55Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0805 16:38:00.369888    5521 pod_ready.go:92] pod "kube-proxy-s65dd" in "kube-system" namespace has status "Ready":"True"
	I0805 16:38:00.369900    5521 pod_ready.go:81] duration metric: took 388.763276ms for pod "kube-proxy-s65dd" in "kube-system" namespace to be "Ready" ...
	I0805 16:38:00.369909    5521 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:38:00.565911    5521 request.go:629] Waited for 195.966473ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:38:00.566005    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-985000
	I0805 16:38:00.566010    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:00.566016    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:00.566021    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:00.567727    5521 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0805 16:38:00.567736    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:00.567741    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:00.567744    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:00.567746    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:00.567750    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:00.567753    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:38:00.567756    5521 round_trippers.go:580]     Audit-Id: e82326e5-6b6c-4bbe-9e4b-0ddab6f947e6
	I0805 16:38:00.567921    5521 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-985000","namespace":"kube-system","uid":"5e23b1b7-e45d-4b43-831c-aa835c5e536d","resourceVersion":"1502","creationTimestamp":"2024-08-05T23:21:06Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.mirror":"d110ae14602908970c81c0d8a5c21147","kubernetes.io/config.seen":"2024-08-05T23:21:06.366029633Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0805 16:38:00.765952    5521 request.go:629] Waited for 197.798951ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:38:00.766012    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes/multinode-985000
	I0805 16:38:00.766024    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:00.766035    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:00.766043    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:00.768641    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:38:00.768656    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:00.768663    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:00.768668    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:00.768672    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:00.768679    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:00.768686    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:38:00.768690    5521 round_trippers.go:580]     Audit-Id: 185ed8df-c8cf-4ff7-8566-ce38bafe88b6
	I0805 16:38:00.768965    5521 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1525","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-08-05T23:21:03Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0805 16:38:00.769214    5521 pod_ready.go:92] pod "kube-scheduler-multinode-985000" in "kube-system" namespace has status "Ready":"True"
	I0805 16:38:00.769227    5521 pod_ready.go:81] duration metric: took 399.310045ms for pod "kube-scheduler-multinode-985000" in "kube-system" namespace to be "Ready" ...
	I0805 16:38:00.769236    5521 pod_ready.go:38] duration metric: took 3.310501987s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
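
The Ready gate above boils down to polling each pod's PodReady condition through the API server until it reports True or the 6m0s budget runs out. A minimal client-go sketch of that poll, assuming a kubeconfig at the default location and using a pod name from this run:

    // pod_ready_sketch.go: poll a pod's Ready condition, as the gate above does.
    // Assumes a kubeconfig at the default path; pod name taken from this run.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute) // same 6m0s budget as the log
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-fwgw7", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    // A pod is "Ready" when the PodReady condition is True.
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for Ready")
    }
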
	I0805 16:38:00.769251    5521 api_server.go:52] waiting for apiserver process to appear ...
	I0805 16:38:00.769314    5521 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:38:00.780856    5521 command_runner.go:130] > 1713
	I0805 16:38:00.780992    5521 api_server.go:72] duration metric: took 14.535377095s to wait for apiserver process to appear ...
	I0805 16:38:00.781000    5521 api_server.go:88] waiting for apiserver healthz status ...
	I0805 16:38:00.781009    5521 api_server.go:253] Checking apiserver healthz at https://192.169.0.13:8443/healthz ...
	I0805 16:38:00.784000    5521 api_server.go:279] https://192.169.0.13:8443/healthz returned 200:
	ok
	I0805 16:38:00.784029    5521 round_trippers.go:463] GET https://192.169.0.13:8443/version
	I0805 16:38:00.784034    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:00.784041    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:00.784045    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:00.784553    5521 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0805 16:38:00.784561    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:00.784567    5521 round_trippers.go:580]     Content-Length: 263
	I0805 16:38:00.784570    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:00 GMT
	I0805 16:38:00.784572    5521 round_trippers.go:580]     Audit-Id: 5f0639a4-edd4-4f06-9ffe-bc3569a1e001
	I0805 16:38:00.784575    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:00.784578    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:00.784582    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:00.784584    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:00.784592    5521 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0805 16:38:00.784614    5521 api_server.go:141] control plane version: v1.30.3
	I0805 16:38:00.784621    5521 api_server.go:131] duration metric: took 3.617958ms to wait for apiserver health ...
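
The health gate above is two cheap requests: GET /healthz must return 200 with body "ok", then GET /version is decoded into the structure printed in the log. The same version lookup can be done through client-go's discovery client; a sketch assuming a reachable kubeconfig:

    // version_sketch.go: fetch the control-plane version, i.e. what the
    // GET /version response above carries. Assumes a default kubeconfig.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        info, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        // Prints e.g. "control plane version: v1.30.3 (go1.22.5, linux/amd64)".
        fmt.Printf("control plane version: %s (%s, %s)\n", info.GitVersion, info.GoVersion, info.Platform)
    }
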
	I0805 16:38:00.784627    5521 system_pods.go:43] waiting for kube-system pods to appear ...
	I0805 16:38:00.965403    5521 request.go:629] Waited for 180.737038ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:38:00.965497    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:38:00.965511    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:00.965523    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:00.965530    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:00.969409    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:38:00.969427    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:00.969435    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:00.969440    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:00.969467    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:00.969482    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:01 GMT
	I0805 16:38:00.969489    5521 round_trippers.go:580]     Audit-Id: 9df3ad2c-a16e-4582-8dab-0552f9f48e75
	I0805 16:38:00.969493    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:00.970371    5521 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1531"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1520","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 72029 chars]
	I0805 16:38:00.971896    5521 system_pods.go:59] 10 kube-system pods found
	I0805 16:38:00.971906    5521 system_pods.go:61] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:38:00.971910    5521 system_pods.go:61] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:38:00.971912    5521 system_pods.go:61] "kindnet-5kfjr" [d68d8211-58f0-4a8f-904a-c6f9f530d58d] Running
	I0805 16:38:00.971915    5521 system_pods.go:61] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:38:00.971917    5521 system_pods.go:61] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:38:00.971920    5521 system_pods.go:61] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:38:00.971923    5521 system_pods.go:61] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:38:00.971926    5521 system_pods.go:61] "kube-proxy-s65dd" [25cd7fe5-8af2-4869-be11-1eb8c5a7ec01] Running
	I0805 16:38:00.971929    5521 system_pods.go:61] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:38:00.971931    5521 system_pods.go:61] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:38:00.971935    5521 system_pods.go:74] duration metric: took 187.304764ms to wait for pod list to return data ...
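
The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter: with the library defaults (5 QPS, burst 10), a burst of GETs queues for roughly 200ms apiece, which matches the waits above. A sketch of raising those limits on a rest.Config, should the throttling matter:

    // throttle_sketch.go: raise client-go's client-side rate limit, the
    // source of the "Waited ... due to client-side throttling" lines above.
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        // Defaults are QPS=5 and Burst=10 when left at zero; raising them
        // removes the ~200ms client-side queueing seen in the log.
        cfg.QPS = 50
        cfg.Burst = 100
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            panic(err)
        }
    }
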
	I0805 16:38:00.971941    5521 default_sa.go:34] waiting for default service account to be created ...
	I0805 16:38:01.166632    5521 request.go:629] Waited for 194.612281ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:38:01.166685    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/default/serviceaccounts
	I0805 16:38:01.166696    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:01.166710    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:01.166717    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:01.169824    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:38:01.169846    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:01.169857    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:01.169864    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:01.169869    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:01.169872    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:01.169875    5521 round_trippers.go:580]     Content-Length: 262
	I0805 16:38:01.169881    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:01 GMT
	I0805 16:38:01.169885    5521 round_trippers.go:580]     Audit-Id: 596b84b0-d5e1-453f-9c6b-48a083c0f9d5
	I0805 16:38:01.169899    5521 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1531"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"b0626468-f73b-4e9b-8270-658495d43f4a","resourceVersion":"337","creationTimestamp":"2024-08-05T23:21:19Z"}}]}
	I0805 16:38:01.170038    5521 default_sa.go:45] found service account: "default"
	I0805 16:38:01.170050    5521 default_sa.go:55] duration metric: took 198.104201ms for default service account to be created ...
	I0805 16:38:01.170061    5521 system_pods.go:116] waiting for k8s-apps to be running ...
	I0805 16:38:01.365509    5521 request.go:629] Waited for 195.385608ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:38:01.365661    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/namespaces/kube-system/pods
	I0805 16:38:01.365673    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:01.365684    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:01.365691    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:01.369380    5521 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0805 16:38:01.369395    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:01.369401    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:01.369406    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:01 GMT
	I0805 16:38:01.369410    5521 round_trippers.go:580]     Audit-Id: 61bbab58-2729-4303-914c-2ce9a281d990
	I0805 16:38:01.369414    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:01.369419    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:01.369423    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:01.370558    5521 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1531"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fqtll","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4d8af129-475b-4185-8b0d-cbda67812964","resourceVersion":"1520","creationTimestamp":"2024-08-05T23:21:20Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-08-05T23:21:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d42b8dc4-6bfa-4b0c-97a3-753ff71f60bd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 72029 chars]
	I0805 16:38:01.372078    5521 system_pods.go:86] 10 kube-system pods found
	I0805 16:38:01.372087    5521 system_pods.go:89] "coredns-7db6d8ff4d-fqtll" [4d8af129-475b-4185-8b0d-cbda67812964] Running
	I0805 16:38:01.372091    5521 system_pods.go:89] "etcd-multinode-985000" [8d7ca2d9-8c7b-41b9-a199-de6449107471] Running
	I0805 16:38:01.372095    5521 system_pods.go:89] "kindnet-5kfjr" [d68d8211-58f0-4a8f-904a-c6f9f530d58d] Running
	I0805 16:38:01.372098    5521 system_pods.go:89] "kindnet-tvtvg" [7dd4afe7-2a17-4298-823b-9955e43cfdb2] Running
	I0805 16:38:01.372101    5521 system_pods.go:89] "kube-apiserver-multinode-985000" [9be3378a-5fab-4907-baad-507918e714e4] Running
	I0805 16:38:01.372104    5521 system_pods.go:89] "kube-controller-manager-multinode-985000" [4ad64361-65de-4b0b-b2a3-07df18c2e603] Running
	I0805 16:38:01.372108    5521 system_pods.go:89] "kube-proxy-fwgw7" [3fb72e39-699d-4123-ae5e-e314a191d904] Running
	I0805 16:38:01.372111    5521 system_pods.go:89] "kube-proxy-s65dd" [25cd7fe5-8af2-4869-be11-1eb8c5a7ec01] Running
	I0805 16:38:01.372114    5521 system_pods.go:89] "kube-scheduler-multinode-985000" [5e23b1b7-e45d-4b43-831c-aa835c5e536d] Running
	I0805 16:38:01.372117    5521 system_pods.go:89] "storage-provisioner" [72ec8458-5c62-43eb-9120-0146e6ccaf8f] Running
	I0805 16:38:01.372121    5521 system_pods.go:126] duration metric: took 202.055662ms to wait for k8s-apps to be running ...
	I0805 16:38:01.372129    5521 system_svc.go:44] waiting for kubelet service to be running ....
	I0805 16:38:01.372178    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:38:01.384196    5521 system_svc.go:56] duration metric: took 12.064518ms WaitForService to wait for kubelet
	I0805 16:38:01.384212    5521 kubeadm.go:582] duration metric: took 15.138595056s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
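
The kubelet check above runs "sudo systemctl is-active --quiet service kubelet" over SSH; with --quiet, systemctl prints nothing and reports the unit's state purely through its exit code (0 means active). A local sketch of the same check with os/exec:

    // svc_sketch.go: check a systemd unit the way WaitForService does above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // --quiet suppresses output; exit 0 means active, non-zero means not.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }
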
	I0805 16:38:01.384224    5521 node_conditions.go:102] verifying NodePressure condition ...
	I0805 16:38:01.566320    5521 request.go:629] Waited for 182.003764ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.13:8443/api/v1/nodes
	I0805 16:38:01.566366    5521 round_trippers.go:463] GET https://192.169.0.13:8443/api/v1/nodes
	I0805 16:38:01.566373    5521 round_trippers.go:469] Request Headers:
	I0805 16:38:01.566385    5521 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0805 16:38:01.566391    5521 round_trippers.go:473]     Accept: application/json, */*
	I0805 16:38:01.569209    5521 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0805 16:38:01.569222    5521 round_trippers.go:577] Response Headers:
	I0805 16:38:01.569229    5521 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 68e44844-6696-468e-9787-a6c2936a1bac
	I0805 16:38:01.569238    5521 round_trippers.go:580]     Date: Mon, 05 Aug 2024 23:38:01 GMT
	I0805 16:38:01.569244    5521 round_trippers.go:580]     Audit-Id: c16ec0aa-cf96-486e-a79d-d457d64a2789
	I0805 16:38:01.569248    5521 round_trippers.go:580]     Cache-Control: no-cache, private
	I0805 16:38:01.569250    5521 round_trippers.go:580]     Content-Type: application/json
	I0805 16:38:01.569254    5521 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 9d030ebb-5aa9-44a5-be5e-1a9f68b3c9f0
	I0805 16:38:01.569365    5521 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1531"},"items":[{"metadata":{"name":"multinode-985000","uid":"f2173fc6-8a66-4801-b5d5-a962e695428e","resourceVersion":"1525","creationTimestamp":"2024-08-05T23:21:04Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-985000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a179f531dd2dbe55e0d6074abcbc378280f91bb4","minikube.k8s.io/name":"multinode-985000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_08_05T16_21_07_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10031 chars]
	I0805 16:38:01.569754    5521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:38:01.569766    5521 node_conditions.go:123] node cpu capacity is 2
	I0805 16:38:01.569774    5521 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0805 16:38:01.569781    5521 node_conditions.go:123] node cpu capacity is 2
	I0805 16:38:01.569787    5521 node_conditions.go:105] duration metric: took 185.55857ms to run NodePressure ...
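
The NodePressure pass reads each node's capacity fields (ephemeral storage and CPU, as printed above) from the NodeList it just fetched. A client-go sketch of the same read, again assuming a default kubeconfig:

    // node_capacity_sketch.go: list nodes and print the capacity fields the
    // NodePressure check above reports.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            // e.g. "multinode-985000: cpu=2 ephemeral-storage=17734596Ki"
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }
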
	I0805 16:38:01.569796    5521 start.go:241] waiting for startup goroutines ...
	I0805 16:38:01.569804    5521 start.go:246] waiting for cluster config update ...
	I0805 16:38:01.569812    5521 start.go:255] writing updated cluster config ...
	I0805 16:38:01.590862    5521 out.go:177] 
	I0805 16:38:01.612868    5521 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:38:01.612983    5521 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:38:01.635442    5521 out.go:177] * Starting "multinode-985000-m02" worker node in "multinode-985000" cluster
	I0805 16:38:01.677243    5521 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:38:01.677275    5521 cache.go:56] Caching tarball of preloaded images
	I0805 16:38:01.677441    5521 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:38:01.677459    5521 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:38:01.677582    5521 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:38:01.678499    5521 start.go:360] acquireMachinesLock for multinode-985000-m02: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:38:01.678607    5521 start.go:364] duration metric: took 81.884µs to acquireMachinesLock for "multinode-985000-m02"
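
acquireMachinesLock serializes machine creates and starts per profile, with the 500ms retry delay and 13m timeout shown in the log. Minikube uses a named mutex for this; as a simplified, illustrative stand-in (not the real implementation), an exclusive lock file with the same retry shape:

    // lock_sketch.go: simplified stand-in for the machines lock above,
    // using an exclusive lock file with retry. Illustrative only.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func acquire(path string, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            // O_EXCL makes creation fail if another holder already exists.
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(500 * time.Millisecond) // matches the Delay:500ms in the log
        }
    }

    func main() {
        release, err := acquire("/tmp/machines.lock", 13*time.Minute)
        if err != nil {
            panic(err)
        }
        defer release()
        fmt.Println("lock held; safe to start the machine")
    }
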
	I0805 16:38:01.678635    5521 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:38:01.678643    5521 fix.go:54] fixHost starting: m02
	I0805 16:38:01.679008    5521 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:38:01.679028    5521 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:38:01.688188    5521 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53145
	I0805 16:38:01.688589    5521 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:38:01.688918    5521 main.go:141] libmachine: Using API Version  1
	I0805 16:38:01.688930    5521 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:38:01.689133    5521 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:38:01.689265    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:01.689361    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:38:01.689448    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:38:01.689523    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 4678
	I0805 16:38:01.690467    5521 fix.go:112] recreateIfNeeded on multinode-985000-m02: state=Stopped err=<nil>
	I0805 16:38:01.690478    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:01.690482    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid 4678 missing from process table
	W0805 16:38:01.690569    5521 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:38:01.711256    5521 out.go:177] * Restarting existing hyperkit VM for "multinode-985000-m02" ...
	I0805 16:38:01.732476    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .Start
	I0805 16:38:01.732792    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:38:01.732823    5521 main.go:141] libmachine: (multinode-985000-m02) minikube might have been shut down in an unclean way; the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid
	I0805 16:38:01.734619    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid 4678 missing from process table
	I0805 16:38:01.734647    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | pid 4678 is in state "Stopped"
	I0805 16:38:01.734664    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid...
	I0805 16:38:01.734965    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Using UUID ab5b9c9f-9e28-4bc2-8fcd-b98fce011173
	I0805 16:38:01.762464    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Generated MAC a6:1c:88:9c:44:3
	I0805 16:38:01.762484    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:38:01.762607    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6900)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0805 16:38:01.762638    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6900)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0805 16:38:01.762681    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "ab5b9c9f-9e28-4bc2-8fcd-b98fce011173", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/j
enkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:38:01.762732    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U ab5b9c9f-9e28-4bc2-8fcd-b98fce011173 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/multinode-985000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/mult
inode-985000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
	I0805 16:38:01.762746    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:38:01.764220    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 DEBUG: hyperkit: Pid is 5546
	I0805 16:38:01.764724    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Attempt 0
	I0805 16:38:01.764744    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:38:01.764814    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 5546
	I0805 16:38:01.766771    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Searching for a6:1c:88:9c:44:3 in /var/db/dhcpd_leases ...
	I0805 16:38:01.766808    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0805 16:38:01.766817    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b3b9}
	I0805 16:38:01.766827    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:8a:98:fe:93:40:f0 ID:1,8a:98:fe:93:40:f0 Lease:0x66b2b34c}
	I0805 16:38:01.766833    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b00c}
	I0805 16:38:01.766840    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | Found match: a6:1c:88:9c:44:3
	I0805 16:38:01.766846    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | IP: 192.169.0.14
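
The driver resolves the VM's IP by scanning /var/db/dhcpd_leases for the generated MAC; note that hyperkit writes octets without leading zeros (the "3" at the end of a6:1c:88:9c:44:3), so the comparison must match that form. A sketch of the scan, assuming macOS bootpd's brace-delimited entry format with ip_address=/hw_address= keys (not the driver's exact parser):

    // leases_sketch.go: find the IP leased to a MAC in /var/db/dhcpd_leases.
    // Assumes bootpd's format; the target MAC is taken from this run.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        target := "a6:1c:88:9c:44:3" // hyperkit MACs omit leading zeros
        f, err := os.Open("/var/db/dhcpd_leases")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var ip string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            switch {
            case strings.HasPrefix(line, "ip_address="):
                ip = strings.TrimPrefix(line, "ip_address=")
            case strings.HasPrefix(line, "hw_address="):
                // Value looks like "1,a6:1c:88:9c:44:3"; drop the type prefix.
                hw := strings.TrimPrefix(line, "hw_address=")
                if i := strings.IndexByte(hw, ','); i >= 0 {
                    hw = hw[i+1:]
                }
                if hw == target && ip != "" {
                    fmt.Println("IP:", ip)
                    return
                }
            }
        }
        fmt.Println("no lease found for", target)
    }
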
	I0805 16:38:01.766898    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetConfigRaw
	I0805 16:38:01.767595    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:38:01.767783    5521 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:38:01.768260    5521 machine.go:94] provisionDockerMachine start ...
	I0805 16:38:01.768271    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:01.768389    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:01.768494    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:01.768587    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:01.768704    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:01.768800    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:01.768955    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:01.769112    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:01.769120    5521 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:38:01.772314    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:38:01.780646    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:38:01.781683    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:38:01.781725    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:38:01.781742    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:38:01.781754    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:01 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:38:02.165919    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:38:02.165934    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:38:02.281252    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:38:02.281273    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:38:02.281284    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:38:02.281293    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:38:02.282119    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:38:02.282130    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:02 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:38:07.861454    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:07 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:38:07.861538    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:07 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:38:07.861548    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:07 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:38:07.885114    5521 main.go:141] libmachine: (multinode-985000-m02) DBG | 2024/08/05 16:38:07 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:38:12.833107    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 16:38:12.833122    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:38:12.833275    5521 buildroot.go:166] provisioning hostname "multinode-985000-m02"
	I0805 16:38:12.833287    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:38:12.833379    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:12.833467    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:12.833553    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:12.833648    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:12.833745    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:12.833872    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:12.834012    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:12.834021    5521 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000-m02 && echo "multinode-985000-m02" | sudo tee /etc/hostname
	I0805 16:38:12.899963    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000-m02
	
	I0805 16:38:12.899978    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:12.900133    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:12.900233    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:12.900332    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:12.900419    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:12.900559    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:12.900721    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:12.900732    5521 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:38:12.963291    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: 
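
Each provisioning step above is one command run over SSH with the machine's generated key (user "docker", port 22, as the sshutil line shows). A sketch of one such round trip using golang.org/x/crypto/ssh; the key path and address are placeholders standing in for this run's values:

    // ssh_sketch.go: run a remote command the way the provisioner does above.
    // Key path and host are placeholders; test-VM-only host key handling.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/path/to/machines/multinode-985000-m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
        }
        client, err := ssh.Dial("tcp", "192.169.0.14:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Printf("remote hostname: %s", out)
    }
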
	I0805 16:38:12.963306    5521 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:38:12.963316    5521 buildroot.go:174] setting up certificates
	I0805 16:38:12.963325    5521 provision.go:84] configureAuth start
	I0805 16:38:12.963332    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetMachineName
	I0805 16:38:12.963463    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:38:12.963563    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:12.963644    5521 provision.go:143] copyHostCerts
	I0805 16:38:12.963672    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:38:12.963719    5521 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:38:12.963724    5521 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:38:12.963846    5521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:38:12.964058    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:38:12.964088    5521 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:38:12.964093    5521 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:38:12.964171    5521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:38:12.964327    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:38:12.964357    5521 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:38:12.964362    5521 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:38:12.964431    5521 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:38:12.964609    5521 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000-m02 san=[127.0.0.1 192.169.0.14 localhost minikube multinode-985000-m02]
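
The server certificate is generated with the SAN set listed above (loopback, the machine IP, and the host names) and signed by the local CA. A self-contained sketch of producing a certificate with that SAN set using crypto/x509; self-signed here for brevity, unlike the provisioner, which signs with its CA key:

    // cert_sketch.go: generate a server cert carrying the san=[...] set above.
    // Illustrative sketch, not minikube's provisioning code.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-985000-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs matching the san=[...] list in the log line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.14")},
            DNSNames:    []string{"localhost", "minikube", "multinode-985000-m02"},
        }
        // Self-signed (template == parent); the real flow signs with the CA.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
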
	I0805 16:38:13.029718    5521 provision.go:177] copyRemoteCerts
	I0805 16:38:13.029767    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:38:13.029782    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:13.029926    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:13.030013    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.030100    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:13.030195    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:38:13.063868    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:38:13.063938    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0805 16:38:13.083721    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:38:13.083789    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:38:13.103391    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:38:13.103455    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0805 16:38:13.123247    5521 provision.go:87] duration metric: took 159.914588ms to configureAuth
	I0805 16:38:13.123259    5521 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:38:13.123427    5521 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:38:13.123441    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:13.123574    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:13.123660    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:13.123737    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.123827    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.123918    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:13.124026    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:13.124190    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:13.124198    5521 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:38:13.182171    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:38:13.182183    5521 buildroot.go:70] root file system type: tmpfs
	I0805 16:38:13.182268    5521 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:38:13.182279    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:13.182405    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:13.182503    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.182591    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.182683    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:13.182809    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:13.182954    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:13.183003    5521 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.13"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:38:13.248138    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.13
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:38:13.248155    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:13.248304    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:13.248405    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.248495    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:13.248573    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:13.248699    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:13.248870    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:13.248883    5521 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:38:14.774504    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:38:14.774518    5521 machine.go:97] duration metric: took 13.006233682s to provisionDockerMachine
	I0805 16:38:14.774527    5521 start.go:293] postStartSetup for "multinode-985000-m02" (driver="hyperkit")
	I0805 16:38:14.774535    5521 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:38:14.774546    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:14.774714    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:38:14.774729    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:14.774827    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:14.774909    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:14.774998    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:14.775085    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:38:14.816544    5521 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:38:14.820061    5521 command_runner.go:130] > NAME=Buildroot
	I0805 16:38:14.820070    5521 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:38:14.820074    5521 command_runner.go:130] > ID=buildroot
	I0805 16:38:14.820078    5521 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:38:14.820083    5521 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:38:14.820286    5521 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:38:14.820300    5521 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:38:14.820397    5521 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:38:14.820538    5521 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:38:14.820545    5521 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:38:14.820707    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:38:14.833566    5521 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:38:14.861185    5521 start.go:296] duration metric: took 86.648603ms for postStartSetup
	I0805 16:38:14.861206    5521 fix.go:56] duration metric: took 13.182545662s for fixHost
	I0805 16:38:14.861238    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:14.861375    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:14.861467    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:14.861563    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:14.861652    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:14.861768    5521 main.go:141] libmachine: Using SSH client type: native
	I0805 16:38:14.861912    5521 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x922d0c0] 0x922fe20 <nil>  [] 0s} 192.169.0.14 22 <nil> <nil>}
	I0805 16:38:14.861919    5521 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0805 16:38:14.917690    5521 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722901094.828326920
	
	I0805 16:38:14.917701    5521 fix.go:216] guest clock: 1722901094.828326920
	I0805 16:38:14.917706    5521 fix.go:229] Guest: 2024-08-05 16:38:14.82832692 -0700 PDT Remote: 2024-08-05 16:38:14.861212 -0700 PDT m=+55.555905067 (delta=-32.88508ms)
	I0805 16:38:14.917716    5521 fix.go:200] guest clock delta is within tolerance: -32.88508ms
	I0805 16:38:14.917719    5521 start.go:83] releasing machines lock for "multinode-985000-m02", held for 13.239083998s
	I0805 16:38:14.917737    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:14.917864    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetIP
	I0805 16:38:14.938999    5521 out.go:177] * Found network options:
	I0805 16:38:14.996112    5521 out.go:177]   - NO_PROXY=192.169.0.13
	W0805 16:38:15.018259    5521 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:38:15.018300    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:15.019232    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:15.019568    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .DriverName
	I0805 16:38:15.019685    5521 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:38:15.019730    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	W0805 16:38:15.019879    5521 proxy.go:119] fail to check proxy env: Error ip not in block
	I0805 16:38:15.019923    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:15.019984    5521 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:38:15.020001    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHHostname
	I0805 16:38:15.020157    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHPort
	I0805 16:38:15.020211    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:15.020380    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHKeyPath
	I0805 16:38:15.020412    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:15.020614    5521 main.go:141] libmachine: (multinode-985000-m02) Calling .GetSSHUsername
	I0805 16:38:15.020625    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:38:15.020777    5521 sshutil.go:53] new ssh client: &{IP:192.169.0.14 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000-m02/id_rsa Username:docker}
	I0805 16:38:15.053501    5521 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:38:15.053659    5521 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:38:15.053723    5521 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:38:15.098852    5521 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:38:15.098927    5521 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:38:15.098945    5521 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0805 16:38:15.098953    5521 start.go:495] detecting cgroup driver to use...
	I0805 16:38:15.099023    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:38:15.113615    5521 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:38:15.113873    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:38:15.122000    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:38:15.130421    5521 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:38:15.130464    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:38:15.138622    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:38:15.146769    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:38:15.154881    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:38:15.162940    5521 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:38:15.171228    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:38:15.179545    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:38:15.187667    5521 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:38:15.196019    5521 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:38:15.203310    5521 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:38:15.203418    5521 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:38:15.210899    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:38:15.315364    5521 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0805 16:38:15.333178    5521 start.go:495] detecting cgroup driver to use...
	I0805 16:38:15.333246    5521 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:38:15.351847    5521 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:38:15.352028    5521 command_runner.go:130] > [Unit]
	I0805 16:38:15.352037    5521 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:38:15.352041    5521 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:38:15.352046    5521 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:38:15.352050    5521 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:38:15.352057    5521 command_runner.go:130] > StartLimitBurst=3
	I0805 16:38:15.352063    5521 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:38:15.352066    5521 command_runner.go:130] > [Service]
	I0805 16:38:15.352070    5521 command_runner.go:130] > Type=notify
	I0805 16:38:15.352078    5521 command_runner.go:130] > Restart=on-failure
	I0805 16:38:15.352084    5521 command_runner.go:130] > Environment=NO_PROXY=192.169.0.13
	I0805 16:38:15.352092    5521 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:38:15.352102    5521 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:38:15.352115    5521 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:38:15.352122    5521 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:38:15.352128    5521 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:38:15.352133    5521 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:38:15.352139    5521 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:38:15.352148    5521 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:38:15.352155    5521 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:38:15.352158    5521 command_runner.go:130] > ExecStart=
	I0805 16:38:15.352169    5521 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:38:15.352174    5521 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:38:15.352181    5521 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:38:15.352187    5521 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:38:15.352190    5521 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:38:15.352193    5521 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:38:15.352197    5521 command_runner.go:130] > LimitCORE=infinity
	I0805 16:38:15.352202    5521 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:38:15.352209    5521 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:38:15.352215    5521 command_runner.go:130] > TasksMax=infinity
	I0805 16:38:15.352219    5521 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:38:15.352224    5521 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:38:15.352229    5521 command_runner.go:130] > Delegate=yes
	I0805 16:38:15.352237    5521 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:38:15.352249    5521 command_runner.go:130] > KillMode=process
	I0805 16:38:15.352253    5521 command_runner.go:130] > [Install]
	I0805 16:38:15.352256    5521 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:38:15.352438    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:38:15.367477    5521 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:38:15.384493    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:38:15.395662    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:38:15.405888    5521 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:38:15.468063    5521 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:38:15.478558    5521 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:38:15.493596    5521 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:38:15.493658    5521 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:38:15.496390    5521 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:38:15.496655    5521 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:38:15.503652    5521 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:38:15.519898    5521 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:38:15.619700    5521 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:38:15.722257    5521 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:38:15.722278    5521 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0805 16:38:15.735967    5521 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:38:15.833114    5521 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:39:16.651467    5521 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0805 16:39:16.651483    5521 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0805 16:39:16.651496    5521 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.818287184s)
	I0805 16:39:16.651563    5521 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0805 16:39:16.661216    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:39:16.661228    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:13.420146905Z" level=info msg="Starting up"
	I0805 16:39:16.661236    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:13.420872507Z" level=info msg="containerd not running, starting managed containerd"
	I0805 16:39:16.661248    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:13.421358599Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=497
	I0805 16:39:16.661258    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.437602421Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0805 16:39:16.661268    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454632195Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0805 16:39:16.661294    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454680682Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0805 16:39:16.661303    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454724229Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0805 16:39:16.661313    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454738567Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661323    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454771554Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:39:16.661333    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454832124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661358    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455014271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:39:16.661368    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455053874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661380    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455070229Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:39:16.661390    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455079145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661401    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455109467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661411    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455253015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661426    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.456861169Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:39:16.661438    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.456915956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:39:16.661496    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457058253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:39:16.661510    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457101847Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0805 16:39:16.661521    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457151686Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0805 16:39:16.661529    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457193291Z" level=info msg="metadata content store policy set" policy=shared
	I0805 16:39:16.661537    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457536850Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0805 16:39:16.661546    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457637715Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0805 16:39:16.661555    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457694331Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0805 16:39:16.661564    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457728855Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0805 16:39:16.661573    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457761160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0805 16:39:16.661582    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457827388Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0805 16:39:16.661591    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458029068Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0805 16:39:16.661599    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458106036Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0805 16:39:16.661608    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458141669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0805 16:39:16.661618    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458173056Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0805 16:39:16.661628    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458207694Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661638    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458242036Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661647    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458286329Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661656    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458320625Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661666    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458360911Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661683    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458395522Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661748    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458435461Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661759    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458468994Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0805 16:39:16.661770    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458507655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661780    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458543528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661789    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458575409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661797    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458606090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661806    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458640753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661816    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458672527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661825    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458702141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661833    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458786564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661843    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458833470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661851    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458867942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661860    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458897905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661869    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458927275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661878    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458956835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661891    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458999344Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0805 16:39:16.661900    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459042185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661909    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459076838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.661918    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459117163Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0805 16:39:16.661928    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459171448Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0805 16:39:16.661939    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459206426Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0805 16:39:16.661948    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459236530Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0805 16:39:16.662025    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459266816Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0805 16:39:16.662039    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459297300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0805 16:39:16.662049    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459333043Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0805 16:39:16.662058    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459365111Z" level=info msg="NRI interface is disabled by configuration."
	I0805 16:39:16.662068    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459520257Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0805 16:39:16.662076    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459589097Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0805 16:39:16.662085    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459647415Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0805 16:39:16.662098    5521 command_runner.go:130] > Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459731249Z" level=info msg="containerd successfully booted in 0.022632s"
	I0805 16:39:16.662106    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.442507541Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0805 16:39:16.662113    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.446047233Z" level=info msg="Loading containers: start."
	I0805 16:39:16.662134    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.533905829Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0805 16:39:16.662147    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.600469950Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0805 16:39:16.662155    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.643991126Z" level=info msg="Loading containers: done."
	I0805 16:39:16.662165    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.660081921Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0805 16:39:16.662172    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.660224037Z" level=info msg="Daemon has completed initialization"
	I0805 16:39:16.662182    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.679152512Z" level=info msg="API listen on /var/run/docker.sock"
	I0805 16:39:16.662188    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	I0805 16:39:16.662195    5521 command_runner.go:130] > Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.679221051Z" level=info msg="API listen on [::]:2376"
	I0805 16:39:16.662203    5521 command_runner.go:130] > Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.785720729Z" level=info msg="Processing signal 'terminated'"
	I0805 16:39:16.662211    5521 command_runner.go:130] > Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786631200Z" level=info msg="Daemon shutdown complete"
	I0805 16:39:16.662222    5521 command_runner.go:130] > Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786734889Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0805 16:39:16.662233    5521 command_runner.go:130] > Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786818951Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	I0805 16:39:16.662243    5521 command_runner.go:130] > Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786854490Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0805 16:39:16.662276    5521 command_runner.go:130] > Aug 05 23:38:15 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0805 16:39:16.662283    5521 command_runner.go:130] > Aug 05 23:38:16 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0805 16:39:16.662289    5521 command_runner.go:130] > Aug 05 23:38:16 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0805 16:39:16.662295    5521 command_runner.go:130] > Aug 05 23:38:16 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:39:16.662302    5521 command_runner.go:130] > Aug 05 23:38:16 multinode-985000-m02 dockerd[909]: time="2024-08-05T23:38:16.819558392Z" level=info msg="Starting up"
	I0805 16:39:16.662312    5521 command_runner.go:130] > Aug 05 23:39:16 multinode-985000-m02 dockerd[909]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0805 16:39:16.662323    5521 command_runner.go:130] > Aug 05 23:39:16 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0805 16:39:16.662329    5521 command_runner.go:130] > Aug 05 23:39:16 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0805 16:39:16.662335    5521 command_runner.go:130] > Aug 05 23:39:16 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0805 16:39:16.687918    5521 out.go:177] 
	W0805 16:39:16.708897    5521 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 05 23:38:13 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:38:13 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:13.420146905Z" level=info msg="Starting up"
	Aug 05 23:38:13 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:13.420872507Z" level=info msg="containerd not running, starting managed containerd"
	Aug 05 23:38:13 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:13.421358599Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=497
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.437602421Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454632195Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454680682Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454724229Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454738567Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454771554Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.454832124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455014271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455053874Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455070229Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455079145Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455109467Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.455253015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.456861169Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.456915956Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457058253Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457101847Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457151686Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457193291Z" level=info msg="metadata content store policy set" policy=shared
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457536850Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457637715Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457694331Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457728855Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457761160Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.457827388Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458029068Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458106036Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458141669Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458173056Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458207694Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458242036Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458286329Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458320625Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458360911Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458395522Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458435461Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458468994Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458507655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458543528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458575409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458606090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458640753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458672527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458702141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458786564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458833470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458867942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458897905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458927275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458956835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.458999344Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459042185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459076838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459117163Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459171448Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459206426Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459236530Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459266816Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459297300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459333043Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459365111Z" level=info msg="NRI interface is disabled by configuration."
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459520257Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459589097Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459647415Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 05 23:38:13 multinode-985000-m02 dockerd[497]: time="2024-08-05T23:38:13.459731249Z" level=info msg="containerd successfully booted in 0.022632s"
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.442507541Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.446047233Z" level=info msg="Loading containers: start."
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.533905829Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.600469950Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.643991126Z" level=info msg="Loading containers: done."
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.660081921Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.660224037Z" level=info msg="Daemon has completed initialization"
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.679152512Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 05 23:38:14 multinode-985000-m02 systemd[1]: Started Docker Application Container Engine.
	Aug 05 23:38:14 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:14.679221051Z" level=info msg="API listen on [::]:2376"
	Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.785720729Z" level=info msg="Processing signal 'terminated'"
	Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786631200Z" level=info msg="Daemon shutdown complete"
	Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786734889Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786818951Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Aug 05 23:38:15 multinode-985000-m02 dockerd[489]: time="2024-08-05T23:38:15.786854490Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 05 23:38:15 multinode-985000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Aug 05 23:38:16 multinode-985000-m02 systemd[1]: docker.service: Deactivated successfully.
	Aug 05 23:38:16 multinode-985000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Aug 05 23:38:16 multinode-985000-m02 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:38:16 multinode-985000-m02 dockerd[909]: time="2024-08-05T23:38:16.819558392Z" level=info msg="Starting up"
	Aug 05 23:39:16 multinode-985000-m02 dockerd[909]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 05 23:39:16 multinode-985000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 05 23:39:16 multinode-985000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 05 23:39:16 multinode-985000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0805 16:39:16.709036    5521 out.go:239] * 
	W0805 16:39:16.710224    5521 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:39:16.772583    5521 out.go:177] 
	
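	The "Failed to start Docker Application Container Engine" sequence above reduces to a single cause: dockerd on multinode-985000-m02 timed out dialing /run/containerd/containerd.sock ("context deadline exceeded"), so the daemon never came up. As a hedged triage sketch (the profile and node names are taken from this log; the SSH/systemd workflow is a generic assumption, not part of the recorded run):
	
	  # SSH into the affected worker node of the multinode profile
	  minikube ssh -p multinode-985000 -n multinode-985000-m02
	  # Inside the node: is containerd alive, and what does its journal say?
	  sudo systemctl status containerd
	  sudo journalctl -u containerd --no-pager | tail -n 50
	  # Does the socket dockerd tried to dial actually exist?
	  ls -l /run/containerd/containerd.sock
	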
	
	==> Docker <==
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.530647852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.530659237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.530721053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.587753877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.587813098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.587868053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.587933581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:37:59 multinode-985000 cri-dockerd[1158]: time="2024-08-05T23:37:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cd4b2b55e63d667baa0f6c6c9596a80de9a5e7e56f52b4f35c1a9f872b7103a5/resolv.conf as [nameserver 192.169.0.1]"
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.809728237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.809773629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.809829513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.809895416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:37:59 multinode-985000 cri-dockerd[1158]: time="2024-08-05T23:37:59Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/658cceb77ae8c0f75cf82b1523a9419bd5b36531ba34b839ac50b6aefb77d462/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.904825743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.904885148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.904912065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:37:59 multinode-985000 dockerd[909]: time="2024-08-05T23:37:59.905156720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:38:14 multinode-985000 dockerd[903]: time="2024-08-05T23:38:14.290421548Z" level=info msg="ignoring event" container=0d0f4c86d1e8c797cb0c58d08f505521679191138c65b7051df09ccf4e702a25 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 05 23:38:14 multinode-985000 dockerd[909]: time="2024-08-05T23:38:14.291138494Z" level=info msg="shim disconnected" id=0d0f4c86d1e8c797cb0c58d08f505521679191138c65b7051df09ccf4e702a25 namespace=moby
	Aug 05 23:38:14 multinode-985000 dockerd[909]: time="2024-08-05T23:38:14.291376058Z" level=warning msg="cleaning up after shim disconnected" id=0d0f4c86d1e8c797cb0c58d08f505521679191138c65b7051df09ccf4e702a25 namespace=moby
	Aug 05 23:38:14 multinode-985000 dockerd[909]: time="2024-08-05T23:38:14.291419423Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 05 23:38:27 multinode-985000 dockerd[909]: time="2024-08-05T23:38:27.687033437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 05 23:38:27 multinode-985000 dockerd[909]: time="2024-08-05T23:38:27.687615016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 05 23:38:27 multinode-985000 dockerd[909]: time="2024-08-05T23:38:27.687656640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 05 23:38:27 multinode-985000 dockerd[909]: time="2024-08-05T23:38:27.687946254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f0f4bede55f3a       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       2                   3dbf65ea93f78       storage-provisioner
	fb1f1e1ed4457       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   658cceb77ae8c       busybox-fc5497c4f-44k5g
	2141742da0666       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   1                   cd4b2b55e63d6       coredns-7db6d8ff4d-fqtll
	d5738d55fecd4       917d7814b9b5b                                                                                         4 minutes ago       Running             kindnet-cni               1                   0f87877cd7c1a       kindnet-tvtvg
	0d0f4c86d1e8c       6e38f40d628db                                                                                         4 minutes ago       Exited              storage-provisioner       1                   3dbf65ea93f78       storage-provisioner
	413cda260d217       55bb025d2cfa5                                                                                         4 minutes ago       Running             kube-proxy                1                   b802ec8e629da       kube-proxy-fwgw7
	ff391cbc1ee5d       3edc18e7b7672                                                                                         4 minutes ago       Running             kube-scheduler            1                   12292d1aa4843       kube-scheduler-multinode-985000
	ee05acb4726f8       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      1                   0b1913061cd3f       etcd-multinode-985000
	92bdde18e9bc2       1f6d574d502f3                                                                                         4 minutes ago       Running             kube-apiserver            1                   4f42c6fa501f4       kube-apiserver-multinode-985000
	b348fa62c4a57       76932a3b37d7e                                                                                         4 minutes ago       Running             kube-controller-manager   1                   3bf209dcf9a99       kube-controller-manager-multinode-985000
	0cbc162071e51       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Exited              busybox                   0                   abfb33d4f204d       busybox-fc5497c4f-44k5g
	c9365aec33892       cbb01a7bd410d                                                                                         20 minutes ago      Exited              coredns                   0                   35b9ac42edc06       coredns-7db6d8ff4d-fqtll
	724e5cfab0a27       kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3              20 minutes ago      Exited              kindnet-cni               0                   65a1122097f07       kindnet-tvtvg
	d58ca48f9f8b2       55bb025d2cfa5                                                                                         20 minutes ago      Exited              kube-proxy                0                   c91338eb0e138       kube-proxy-fwgw7
	792feba1a6f6b       3edc18e7b7672                                                                                         20 minutes ago      Exited              kube-scheduler            0                   c86e04eb7823b       kube-scheduler-multinode-985000
	1fdd85b796ab3       3861cfcd7c04c                                                                                         20 minutes ago      Exited              etcd                      0                   b58900db52990       etcd-multinode-985000
	d11865076c645       76932a3b37d7e                                                                                         20 minutes ago      Exited              kube-controller-manager   0                   55a20063845e3       kube-controller-manager-multinode-985000
	608878b33f358       1f6d574d502f3                                                                                         20 minutes ago      Exited              kube-apiserver            0                   569788c2699f1       kube-apiserver-multinode-985000
	
	
	==> coredns [2141742da066] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55096 - 16258 "HINFO IN 3588705990584082194.7089874688342145824. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012628073s
	
	
	==> coredns [c9365aec3389] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:57821 - 19682 "HINFO IN 7732396596932693360.4385804994640298901. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014623104s
	[INFO] 10.244.0.3:44234 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136193s
	[INFO] 10.244.0.3:37423 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.058799401s
	[INFO] 10.244.0.3:57961 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.010090318s
	[INFO] 10.244.0.3:37799 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.012765436s
	[INFO] 10.244.0.3:46499 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000078364s
	[INFO] 10.244.0.3:42436 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011216992s
	[INFO] 10.244.0.3:35880 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000144767s
	[INFO] 10.244.0.3:39224 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104006s
	[INFO] 10.244.0.3:48536 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.013324615s
	[INFO] 10.244.0.3:55841 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000221823s
	[INFO] 10.244.0.3:46712 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000111417s
	[INFO] 10.244.0.3:51982 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000099744s
	[INFO] 10.244.0.3:55425 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000080184s
	[INFO] 10.244.0.3:58084 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000119904s
	[INFO] 10.244.0.3:57892 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049065s
	[INFO] 10.244.0.3:52329 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049128s
	[INFO] 10.244.0.3:60384 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083319s
	[INFO] 10.244.0.3:51923 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000058598s
	[INFO] 10.244.0.3:37985 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00007256s
	[INFO] 10.244.0.3:45792 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000071025s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
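	The CoreDNS query log above is the busybox test pod exercising cluster DNS; the NXDOMAIN answers for kubernetes.default and its search-path expansions are expected resolver behavior with "options ndots:5". A minimal sketch for reproducing such a lookup by hand, assuming the kubectl context carries the profile name as minikube sets it by default:
	
	  # Run a throwaway busybox pod and resolve the kubernetes Service the test pod queried
	  kubectl --context multinode-985000 run -it --rm dns-check --image=busybox -- \
	    nslookup kubernetes.default.svc.cluster.local
	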
	
	==> describe nodes <==
	Name:               multinode-985000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-985000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a179f531dd2dbe55e0d6074abcbc378280f91bb4
	                    minikube.k8s.io/name=multinode-985000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_05T16_21_07_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 05 Aug 2024 23:21:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-985000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 05 Aug 2024 23:41:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 05 Aug 2024 23:37:57 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 05 Aug 2024 23:37:57 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 05 Aug 2024 23:37:57 +0000   Mon, 05 Aug 2024 23:21:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 05 Aug 2024 23:37:57 +0000   Mon, 05 Aug 2024 23:37:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.13
	  Hostname:    multinode-985000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 b981b6a36d124fcaadeb3cd3197bf53b
	  System UUID:                3ac6443b-0000-0000-898d-9b152fa64288
	  Boot ID:                    8bf7ffe6-c2c9-4868-8b47-da7da3d15cdf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.1
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-44k5g                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-fqtll                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-multinode-985000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kindnet-tvtvg                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      20m
	  kube-system                 kube-apiserver-multinode-985000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-multinode-985000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-fwgw7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-multinode-985000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 20m                    kube-proxy       
	  Normal  Starting                 4m11s                  kube-proxy       
	  Normal  Starting                 20m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)      kubelet          Node multinode-985000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)      kubelet          Node multinode-985000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)      kubelet          Node multinode-985000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    20m                    kubelet          Node multinode-985000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  20m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m                    kubelet          Node multinode-985000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     20m                    kubelet          Node multinode-985000 status is now: NodeHasSufficientPID
	  Normal  Starting                 20m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           20m                    node-controller  Node multinode-985000 event: Registered Node multinode-985000 in Controller
	  Normal  NodeReady                20m                    kubelet          Node multinode-985000 status is now: NodeReady
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m17s (x8 over 4m17s)  kubelet          Node multinode-985000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s (x8 over 4m17s)  kubelet          Node multinode-985000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s (x7 over 4m17s)  kubelet          Node multinode-985000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                   node-controller  Node multinode-985000 event: Registered Node multinode-985000 in Controller
	
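	The node description above matches what kubectl reports directly; a minimal sketch for re-querying the control-plane node (context name assumed to match the profile, as minikube configures by default):
	
	  # Re-run the same node inspection against the live cluster
	  kubectl --context multinode-985000 describe node multinode-985000
	  # Or route through minikube's bundled kubectl
	  minikube -p multinode-985000 kubectl -- describe node multinode-985000
	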
	
	==> dmesg <==
	[  +5.661439] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007055] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.766548] systemd-fstab-generator[126]: Ignoring "noauto" option for root device
	[  +2.232761] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000003] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.602536] systemd-fstab-generator[465]: Ignoring "noauto" option for root device
	[  +0.108699] systemd-fstab-generator[477]: Ignoring "noauto" option for root device
	[  +1.844656] systemd-fstab-generator[832]: Ignoring "noauto" option for root device
	[  +0.244366] systemd-fstab-generator[869]: Ignoring "noauto" option for root device
	[  +0.093826] systemd-fstab-generator[881]: Ignoring "noauto" option for root device
	[  +0.056002] kauditd_printk_skb: 123 callbacks suppressed
	[  +0.061114] systemd-fstab-generator[895]: Ignoring "noauto" option for root device
	[  +2.459899] systemd-fstab-generator[1111]: Ignoring "noauto" option for root device
	[  +0.103560] systemd-fstab-generator[1123]: Ignoring "noauto" option for root device
	[  +0.100329] systemd-fstab-generator[1135]: Ignoring "noauto" option for root device
	[  +0.122414] systemd-fstab-generator[1150]: Ignoring "noauto" option for root device
	[  +0.416040] systemd-fstab-generator[1279]: Ignoring "noauto" option for root device
	[  +1.958427] systemd-fstab-generator[1414]: Ignoring "noauto" option for root device
	[  +0.064860] kauditd_printk_skb: 180 callbacks suppressed
	[  +5.001373] kauditd_printk_skb: 90 callbacks suppressed
	[  +2.036850] systemd-fstab-generator[2247]: Ignoring "noauto" option for root device
	[  +8.657009] kauditd_printk_skb: 42 callbacks suppressed
	[Aug 5 23:38] kauditd_printk_skb: 16 callbacks suppressed
	
	
	==> etcd [1fdd85b796ab] <==
	{"level":"info","ts":"2024-08-05T23:21:02.852037Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:21:02.855611Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	{"level":"info","ts":"2024-08-05T23:21:02.856003Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.856059Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.85615Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:21:02.863221Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:21:02.86336Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T23:21:02.863406Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T23:21:02.864495Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T23:31:02.914901Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":684}
	{"level":"info","ts":"2024-08-05T23:31:02.918154Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":684,"took":"2.558785ms","hash":2682644219,"current-db-size-bytes":2088960,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2088960,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-08-05T23:31:02.918199Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2682644219,"revision":684,"compact-revision":-1}
	{"level":"info","ts":"2024-08-05T23:36:02.919565Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":925}
	{"level":"info","ts":"2024-08-05T23:36:02.920973Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":925,"took":"1.036284ms","hash":3918561037,"current-db-size-bytes":2088960,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1814528,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-08-05T23:36:02.921075Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3918561037,"revision":925,"compact-revision":684}
	{"level":"info","ts":"2024-08-05T23:37:11.447748Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-08-05T23:37:11.447778Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-985000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"]}
	{"level":"warn","ts":"2024-08-05T23:37:11.447827Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:37:11.447882Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:37:11.491519Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.13:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-08-05T23:37:11.491562Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.13:2379: use of closed network connection"}
	{"level":"info","ts":"2024-08-05T23:37:11.493311Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"e0290fa3161c5471","current-leader-member-id":"e0290fa3161c5471"}
	{"level":"info","ts":"2024-08-05T23:37:11.498118Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-08-05T23:37:11.498186Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-08-05T23:37:11.498193Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-985000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"]}
	
	
	==> etcd [ee05acb4726f] <==
	{"level":"info","ts":"2024-08-05T23:37:40.599067Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:37:40.599077Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-05T23:37:40.599334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 switched to configuration voters=(16152458731666035825)"}
	{"level":"info","ts":"2024-08-05T23:37:40.599394Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","added-peer-id":"e0290fa3161c5471","added-peer-peer-urls":["https://192.169.0.13:2380"]}
	{"level":"info","ts":"2024-08-05T23:37:40.59965Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"87b46e718846f146","local-member-id":"e0290fa3161c5471","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:37:40.599742Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-05T23:37:40.604814Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-05T23:37:40.605055Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e0290fa3161c5471","initial-advertise-peer-urls":["https://192.169.0.13:2380"],"listen-peer-urls":["https://192.169.0.13:2380"],"advertise-client-urls":["https://192.169.0.13:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.13:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-05T23:37:40.605095Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-05T23:37:40.605211Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-08-05T23:37:40.605239Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.13:2380"}
	{"level":"info","ts":"2024-08-05T23:37:41.689469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 is starting a new election at term 2"}
	{"level":"info","ts":"2024-08-05T23:37:41.689514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-08-05T23:37:41.689535Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgPreVoteResp from e0290fa3161c5471 at term 2"}
	{"level":"info","ts":"2024-08-05T23:37:41.689547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became candidate at term 3"}
	{"level":"info","ts":"2024-08-05T23:37:41.689571Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 received MsgVoteResp from e0290fa3161c5471 at term 3"}
	{"level":"info","ts":"2024-08-05T23:37:41.68958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e0290fa3161c5471 became leader at term 3"}
	{"level":"info","ts":"2024-08-05T23:37:41.689585Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e0290fa3161c5471 elected leader e0290fa3161c5471 at term 3"}
	{"level":"info","ts":"2024-08-05T23:37:41.690625Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e0290fa3161c5471","local-member-attributes":"{Name:multinode-985000 ClientURLs:[https://192.169.0.13:2379]}","request-path":"/0/members/e0290fa3161c5471/attributes","cluster-id":"87b46e718846f146","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-05T23:37:41.690781Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:37:41.690883Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-05T23:37:41.691356Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-05T23:37:41.691386Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-05T23:37:41.692361Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-05T23:37:41.700262Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.13:2379"}
	
	
	==> kernel <==
	 23:41:56 up 4 min,  0 users,  load average: 0.28, 0.18, 0.08
	Linux multinode-985000 5.10.207 #1 SMP Mon Jul 29 15:19:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [724e5cfab0a2] <==
	I0805 23:36:04.991992       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.169.0.15 Flags: [] Table: 0} 
	I0805 23:36:14.989579       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:36:14.989997       1 main.go:299] handling current node
	I0805 23:36:14.990198       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:36:14.990433       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:36:24.988684       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:36:24.988821       1 main.go:299] handling current node
	I0805 23:36:24.988872       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:36:24.988911       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:36:34.988817       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:36:34.988909       1 main.go:299] handling current node
	I0805 23:36:34.988935       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:36:34.988949       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:36:44.992669       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:36:44.992745       1 main.go:299] handling current node
	I0805 23:36:44.992779       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:36:44.992802       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:36:54.996793       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:36:54.996835       1 main.go:299] handling current node
	I0805 23:36:54.996848       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:36:54.996853       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:37:04.997759       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:37:04.997893       1 main.go:299] handling current node
	I0805 23:37:04.998013       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:37:04.998174       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [d5738d55fecd] <==
	I0805 23:40:55.472225       1 main.go:299] handling current node
	I0805 23:40:55.472239       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:40:55.472245       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:41:05.472335       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:41:05.472423       1 main.go:299] handling current node
	I0805 23:41:05.472458       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:41:05.472526       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:41:15.465563       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:41:15.465586       1 main.go:299] handling current node
	I0805 23:41:15.465596       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:41:15.465599       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:41:25.468198       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:41:25.468433       1 main.go:299] handling current node
	I0805 23:41:25.468473       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:41:25.468579       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:41:35.474562       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:41:35.474595       1 main.go:299] handling current node
	I0805 23:41:35.474610       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:41:35.474617       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:41:45.465571       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:41:45.465836       1 main.go:299] handling current node
	I0805 23:41:45.465887       1 main.go:295] Handling node with IPs: map[192.169.0.15:{}]
	I0805 23:41:45.465915       1 main.go:322] Node multinode-985000-m03 has CIDR [10.244.2.0/24] 
	I0805 23:41:55.464991       1 main.go:295] Handling node with IPs: map[192.169.0.13:{}]
	I0805 23:41:55.465012       1 main.go:299] handling current node
	
	
	==> kube-apiserver [608878b33f35] <==
	W0805 23:37:11.486438       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.486583       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.486625       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.486650       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.486674       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.486898       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.486927       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.487716       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.487755       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.487780       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.487847       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.487875       1 logging.go:59] [core] [Channel #76 SubChannel #77] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489041       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489104       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489127       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489147       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489171       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489257       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489281       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489307       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489633       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489864       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.489935       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0805 23:37:11.490056       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0805 23:37:11.514946       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-apiserver [92bdde18e9bc] <==
	I0805 23:37:42.730543       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0805 23:37:42.736278       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0805 23:37:42.737676       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0805 23:37:42.738333       1 shared_informer.go:320] Caches are synced for configmaps
	I0805 23:37:42.738384       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0805 23:37:42.738390       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0805 23:37:42.739302       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0805 23:37:42.741676       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0805 23:37:42.741754       1 aggregator.go:165] initial CRD sync complete...
	I0805 23:37:42.741787       1 autoregister_controller.go:141] Starting autoregister controller
	I0805 23:37:42.741831       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0805 23:37:42.741875       1 cache.go:39] Caches are synced for autoregister controller
	E0805 23:37:42.744121       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0805 23:37:42.798361       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0805 23:37:42.804367       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0805 23:37:42.804860       1 policy_source.go:224] refreshing policies
	I0805 23:37:42.821782       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0805 23:37:43.633884       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0805 23:37:44.781620       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0805 23:37:44.898279       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0805 23:37:44.905563       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0805 23:37:44.945734       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0805 23:37:44.950191       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0805 23:37:55.099564       1 controller.go:615] quota admission added evaluator for: endpoints
	I0805 23:37:55.156540       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b348fa62c4a5] <==
	I0805 23:37:55.228925       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0805 23:37:55.237882       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0805 23:37:55.255771       1 shared_informer.go:320] Caches are synced for PVC protection
	I0805 23:37:55.263474       1 shared_informer.go:320] Caches are synced for attach detach
	I0805 23:37:55.298454       1 shared_informer.go:320] Caches are synced for ephemeral
	I0805 23:37:55.302814       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 23:37:55.314458       1 shared_informer.go:320] Caches are synced for expand
	I0805 23:37:55.338263       1 shared_informer.go:320] Caches are synced for stateful set
	I0805 23:37:55.343814       1 shared_informer.go:320] Caches are synced for resource quota
	I0805 23:37:55.345575       1 shared_informer.go:320] Caches are synced for persistent volume
	I0805 23:37:55.730758       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 23:37:55.734111       1 shared_informer.go:320] Caches are synced for garbage collector
	I0805 23:37:55.734173       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0805 23:37:57.213036       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-985000-m03"
	I0805 23:38:00.018589       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.728µs"
	I0805 23:38:00.035169       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.837404ms"
	I0805 23:38:00.036511       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="41.223µs"
	I0805 23:38:01.038943       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.071233ms"
	I0805 23:38:01.039751       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.639µs"
	I0805 23:38:35.241010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.858922ms"
	I0805 23:38:35.241084       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="36.645µs"
	I0805 23:39:21.044225       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.20437ms"
	I0805 23:39:21.049044       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.579162ms"
	I0805 23:39:21.049312       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="82.313µs"
	E0805 23:41:55.098115       1 gc_controller.go:153] "Failed to get node" err="node \"multinode-985000-m03\" not found" logger="pod-garbage-collector-controller" node="multinode-985000-m03"
	
	
	==> kube-controller-manager [d11865076c64] <==
	I0805 23:22:59.132399       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.529µs"
	I0805 23:34:49.118620       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-985000-m03\" does not exist"
	I0805 23:34:49.123685       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-985000-m03" podCIDRs=["10.244.1.0/24"]
	I0805 23:34:49.553799       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-985000-m03"
	I0805 23:35:12.244278       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-985000-m03"
	I0805 23:35:12.252224       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="22.969µs"
	I0805 23:35:12.259725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.754µs"
	I0805 23:35:14.267796       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.716009ms"
	I0805 23:35:14.267862       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.069µs"
	I0805 23:35:51.179064       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.106041ms"
	I0805 23:35:51.195857       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.438177ms"
	I0805 23:35:51.211043       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.139069ms"
	I0805 23:35:51.211379       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="291.666µs"
	I0805 23:35:55.268521       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-985000-m03\" does not exist"
	I0805 23:35:55.272637       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-985000-m03" podCIDRs=["10.244.2.0/24"]
	I0805 23:35:57.161739       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.697µs"
	I0805 23:36:10.485777       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-985000-m03"
	I0805 23:36:10.496807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="88.532µs"
	I0805 23:36:19.181053       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.67µs"
	I0805 23:36:19.184540       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.764µs"
	I0805 23:36:19.191433       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="29.037µs"
	I0805 23:36:19.365196       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.813µs"
	I0805 23:36:19.367176       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.532µs"
	I0805 23:36:20.387745       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.044943ms"
	I0805 23:36:20.388000       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="19.528µs"
	
	
	==> kube-proxy [413cda260d21] <==
	I0805 23:37:44.324911       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:37:44.341877       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0805 23:37:44.398640       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:37:44.398662       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:37:44.398675       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:37:44.401178       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:37:44.401588       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:37:44.401598       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:37:44.402850       1 config.go:192] "Starting service config controller"
	I0805 23:37:44.403035       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:37:44.403115       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:37:44.403158       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:37:44.403823       1 config.go:319] "Starting node config controller"
	I0805 23:37:44.404599       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:37:44.505447       1 shared_informer.go:320] Caches are synced for node config
	I0805 23:37:44.505492       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:37:44.505525       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [d58ca48f9f8b] <==
	I0805 23:21:21.029929       1 server_linux.go:69] "Using iptables proxy"
	I0805 23:21:21.072929       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.13"]
	I0805 23:21:21.105532       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0805 23:21:21.105552       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0805 23:21:21.105563       1 server_linux.go:165] "Using iptables Proxier"
	I0805 23:21:21.107493       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0805 23:21:21.107594       1 server.go:872] "Version info" version="v1.30.3"
	I0805 23:21:21.107602       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:21:21.108477       1 config.go:192] "Starting service config controller"
	I0805 23:21:21.108482       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0805 23:21:21.108492       1 config.go:101] "Starting endpoint slice config controller"
	I0805 23:21:21.108494       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0805 23:21:21.108784       1 config.go:319] "Starting node config controller"
	I0805 23:21:21.108789       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0805 23:21:21.209420       1 shared_informer.go:320] Caches are synced for node config
	I0805 23:21:21.209474       1 shared_informer.go:320] Caches are synced for service config
	I0805 23:21:21.209501       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [792feba1a6f6] <==
	E0805 23:21:04.024229       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0805 23:21:04.024017       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:21:04.024329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:21:04.024047       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:21:04.024362       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:21:04.024118       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:21:04.024431       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0805 23:21:04.860871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:04.861069       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:04.959895       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0805 23:21:04.959949       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0805 23:21:04.962444       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0805 23:21:04.962496       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0805 23:21:04.968410       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0805 23:21:04.968452       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0805 23:21:05.030527       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0805 23:21:05.030566       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 23:21:05.076451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:05.076659       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:05.118159       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0805 23:21:05.118676       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0805 23:21:05.141945       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0805 23:21:05.142020       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0805 23:21:08.218627       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0805 23:37:11.443644       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [ff391cbc1ee5] <==
	I0805 23:37:40.960901       1 serving.go:380] Generated self-signed cert in-memory
	W0805 23:37:42.679762       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0805 23:37:42.679944       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0805 23:37:42.680026       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0805 23:37:42.680120       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0805 23:37:42.720120       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0805 23:37:42.720155       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0805 23:37:42.722970       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0805 23:37:42.723116       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0805 23:37:42.722988       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0805 23:37:42.723009       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0805 23:37:42.824314       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 05 23:37:57 multinode-985000 kubelet[1421]: I0805 23:37:57.206744    1421 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Aug 05 23:38:15 multinode-985000 kubelet[1421]: I0805 23:38:15.134093    1421 scope.go:117] "RemoveContainer" containerID="3d9fd612d0b14777e3c2f36e84aa669c6aba33c9885ee2054f4dc5d9183e18fa"
	Aug 05 23:38:15 multinode-985000 kubelet[1421]: I0805 23:38:15.134335    1421 scope.go:117] "RemoveContainer" containerID="0d0f4c86d1e8c797cb0c58d08f505521679191138c65b7051df09ccf4e702a25"
	Aug 05 23:38:15 multinode-985000 kubelet[1421]: E0805 23:38:15.134437    1421 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(72ec8458-5c62-43eb-9120-0146e6ccaf8f)\"" pod="kube-system/storage-provisioner" podUID="72ec8458-5c62-43eb-9120-0146e6ccaf8f"
	Aug 05 23:38:27 multinode-985000 kubelet[1421]: I0805 23:38:27.652833    1421 scope.go:117] "RemoveContainer" containerID="0d0f4c86d1e8c797cb0c58d08f505521679191138c65b7051df09ccf4e702a25"
	Aug 05 23:38:39 multinode-985000 kubelet[1421]: E0805 23:38:39.676906    1421 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:38:39 multinode-985000 kubelet[1421]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:38:39 multinode-985000 kubelet[1421]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:38:39 multinode-985000 kubelet[1421]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:38:39 multinode-985000 kubelet[1421]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:39:39 multinode-985000 kubelet[1421]: E0805 23:39:39.671510    1421 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:39:39 multinode-985000 kubelet[1421]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:39:39 multinode-985000 kubelet[1421]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:39:39 multinode-985000 kubelet[1421]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:39:39 multinode-985000 kubelet[1421]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:40:39 multinode-985000 kubelet[1421]: E0805 23:40:39.673179    1421 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:40:39 multinode-985000 kubelet[1421]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:40:39 multinode-985000 kubelet[1421]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:40:39 multinode-985000 kubelet[1421]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:40:39 multinode-985000 kubelet[1421]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 05 23:41:39 multinode-985000 kubelet[1421]: E0805 23:41:39.672796    1421 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 05 23:41:39 multinode-985000 kubelet[1421]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 05 23:41:39 multinode-985000 kubelet[1421]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 05 23:41:39 multinode-985000 kubelet[1421]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 05 23:41:39 multinode-985000 kubelet[1421]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
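The kubelet log above ends with the same "Could not set up iptables canary" error repeating once a minute: the guest kernel has no IPv6 "nat" table, so creating the KUBE-KUBELET-CANARY chain fails. Below is a minimal, hypothetical Go sketch (not part of this harness; the file name and helper are invented, and it assumes ip6tables is on PATH inside the guest) that reproduces the probe behind that error by shelling out to ip6tables the same way kubelet does.

// probe_ip6tables.go: hypothetical sketch, not from the minikube harness.
// It checks whether the guest kernel exposes the IPv6 "nat" table; the
// "Table does not exist (do you need to insmod?)" output seen in the
// kubelet log above indicates the ip6table_nat module is missing.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Listing the nat table triggers the same failure mode as creating
	// the KUBE-KUBELET-CANARY chain in it.
	out, err := exec.Command("ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
	if err != nil {
		fmt.Printf("IPv6 nat table unavailable: %v\n%s", err, out)
		return
	}
	fmt.Println("IPv6 nat table present; the kubelet canary should succeed.")
}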
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-985000 -n multinode-985000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-985000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-2jkfh
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/DeleteNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-985000 describe pod busybox-fc5497c4f-2jkfh
helpers_test.go:282: (dbg) kubectl --context multinode-985000 describe pod busybox-fc5497c4f-2jkfh:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-2jkfh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lxhpc (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-lxhpc:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age    From               Message
	  ----     ------            ----   ----               -------
	  Warning  FailedScheduling  2m36s  default-scheduler  0/2 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 1 No preemption victims found for incoming pod, 1 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/DeleteNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeleteNode (157.32s)
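The post-mortem above is driven by two kubectl invocations (helpers_test.go:261 and :277): list pods whose phase is not Running, then describe each one, which is how the pending busybox-fc5497c4f-2jkfh pod and its anti-affinity FailedScheduling event were surfaced. The following is a minimal sketch, assuming kubectl is on PATH and the multinode-985000 context still exists, for re-running those same queries outside the harness; the program and its run helper are hypothetical, not harness code.

// postmortem.go: hypothetical re-run of the post-mortem queries above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes kubectl with the given arguments and returns its output.
func run(args ...string) string {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl %s failed: %v\n", strings.Join(args, " "), err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	ctx := "multinode-985000" // context name taken from the log above
	// Same field selector and jsonpath the harness uses to find
	// non-running pods across all namespaces.
	pods := run("--context", ctx, "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running")
	for _, pod := range strings.Fields(pods) {
		// Like the harness, this describes by name without -n, which
		// assumes the pod lives in the default namespace.
		fmt.Println(run("--context", ctx, "describe", "pod", pod))
	}
}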

                                                
                                    
TestMultiNode/serial/RestartMultiNode (76.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-985000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-985000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : exit status 90 (1m15.972060568s)
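The restart attempt exits with status 90 after roughly 76 seconds; its captured output follows below. As a minimal sketch, assuming the minikube binary sits at the same relative path used in the log, a caller outside the test harness could capture that exit code like this (the wrapper is hypothetical, not harness code):

// restart_check.go: hypothetical wrapper around the restart command above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "start",
		"-p", "multinode-985000", "--wait=true", "-v=8",
		"--alsologtostderr", "--driver=hyperkit")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// This test observed exit code 90 on the failed restart.
		fmt.Printf("minikube start exited with code %d\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Printf("failed to launch minikube: %v\n", err)
	}
}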

                                                
                                                
-- stdout --
	* [multinode-985000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "multinode-985000" primary control-plane node in "multinode-985000" cluster
	* Restarting existing hyperkit VM for "multinode-985000" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0805 16:42:14.602524    5659 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:42:14.602682    5659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:42:14.602687    5659 out.go:304] Setting ErrFile to fd 2...
	I0805 16:42:14.602691    5659 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:42:14.602868    5659 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:42:14.604291    5659 out.go:298] Setting JSON to false
	I0805 16:42:14.626844    5659 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":4305,"bootTime":1722897029,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:42:14.626936    5659 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:42:14.648695    5659 out.go:177] * [multinode-985000] minikube v1.33.1 on Darwin 14.5
	I0805 16:42:14.670594    5659 notify.go:220] Checking for updates...
	I0805 16:42:14.692423    5659 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:42:14.714445    5659 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:42:14.735431    5659 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:42:14.756677    5659 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:42:14.777480    5659 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:42:14.798704    5659 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:42:14.820837    5659 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:42:14.821189    5659 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:42:14.821236    5659 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:42:14.830116    5659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53270
	I0805 16:42:14.830477    5659 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:42:14.830885    5659 main.go:141] libmachine: Using API Version  1
	I0805 16:42:14.830916    5659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:42:14.831178    5659 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:42:14.831321    5659 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:42:14.831560    5659 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:42:14.831817    5659 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:42:14.831844    5659 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:42:14.840128    5659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53272
	I0805 16:42:14.840448    5659 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:42:14.840793    5659 main.go:141] libmachine: Using API Version  1
	I0805 16:42:14.840802    5659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:42:14.841043    5659 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:42:14.841169    5659 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:42:14.869473    5659 out.go:177] * Using the hyperkit driver based on existing profile
	I0805 16:42:14.911667    5659 start.go:297] selected driver: hyperkit
	I0805 16:42:14.911692    5659 start.go:901] validating driver "hyperkit" against &{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:42:14.911937    5659 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:42:14.912135    5659 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:42:14.912337    5659 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 16:42:14.922021    5659 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 16:42:14.925896    5659 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:42:14.925918    5659 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 16:42:14.928721    5659 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0805 16:42:14.928789    5659 cni.go:84] Creating CNI manager for ""
	I0805 16:42:14.928799    5659 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0805 16:42:14.928878    5659 start.go:340] cluster config:
	{Name:multinode-985000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-985000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.13 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.14 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:42:14.928992    5659 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 16:42:14.971660    5659 out.go:177] * Starting "multinode-985000" primary control-plane node in "multinode-985000" cluster
	I0805 16:42:14.993561    5659 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 16:42:14.993636    5659 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 16:42:14.993655    5659 cache.go:56] Caching tarball of preloaded images
	I0805 16:42:14.993857    5659 preload.go:172] Found /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0805 16:42:14.993875    5659 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 16:42:14.994074    5659 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:42:14.995011    5659 start.go:360] acquireMachinesLock for multinode-985000: {Name:mkf9436dd3ff8caf2e1647b5a407c7f362b7aeb6 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0805 16:42:14.995136    5659 start.go:364] duration metric: took 99.824µs to acquireMachinesLock for "multinode-985000"
	I0805 16:42:14.995178    5659 start.go:96] Skipping create...Using existing machine configuration
	I0805 16:42:14.995228    5659 fix.go:54] fixHost starting: 
	I0805 16:42:14.995597    5659 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:42:14.995625    5659 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:42:15.004676    5659 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53274
	I0805 16:42:15.005020    5659 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:42:15.005377    5659 main.go:141] libmachine: Using API Version  1
	I0805 16:42:15.005391    5659 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:42:15.005636    5659 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:42:15.005760    5659 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:42:15.005874    5659 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:42:15.005969    5659 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:42:15.006044    5659 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 5533
	I0805 16:42:15.006997    5659 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid 5533 missing from process table
	I0805 16:42:15.007034    5659 fix.go:112] recreateIfNeeded on multinode-985000: state=Stopped err=<nil>
	I0805 16:42:15.007054    5659 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	W0805 16:42:15.007137    5659 fix.go:138] unexpected machine state, will restart: <nil>
	I0805 16:42:15.049443    5659 out.go:177] * Restarting existing hyperkit VM for "multinode-985000" ...
	I0805 16:42:15.070729    5659 main.go:141] libmachine: (multinode-985000) Calling .Start
	I0805 16:42:15.071054    5659 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:42:15.071131    5659 main.go:141] libmachine: (multinode-985000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid
	I0805 16:42:15.073005    5659 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid 5533 missing from process table
	I0805 16:42:15.073028    5659 main.go:141] libmachine: (multinode-985000) DBG | pid 5533 is in state "Stopped"
	I0805 16:42:15.073046    5659 main.go:141] libmachine: (multinode-985000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid...
	I0805 16:42:15.073258    5659 main.go:141] libmachine: (multinode-985000) DBG | Using UUID 3ac698fc-f622-443b-898d-9b152fa64288
	I0805 16:42:15.183247    5659 main.go:141] libmachine: (multinode-985000) DBG | Generated MAC e2:6:14:d2:13:ae
	I0805 16:42:15.183273    5659 main.go:141] libmachine: (multinode-985000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000
	I0805 16:42:15.183393    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:15 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bc9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:42:15.183431    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:15 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"3ac698fc-f622-443b-898d-9b152fa64288", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bc9c0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0805 16:42:15.183467    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:15 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "3ac698fc-f622-443b-898d-9b152fa64288", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"}
	I0805 16:42:15.183538    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:15 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 3ac698fc-f622-443b-898d-9b152fa64288 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/multinode-985000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/tty,log=/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/console-ring -f kexec,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/bzimage,/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-985000"
	I0805 16:42:15.183555    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:15 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0805 16:42:15.184994    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:15 DEBUG: hyperkit: Pid is 5673
	I0805 16:42:15.185389    5659 main.go:141] libmachine: (multinode-985000) DBG | Attempt 0
	I0805 16:42:15.185402    5659 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:42:15.185477    5659 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 5673
	I0805 16:42:15.187072    5659 main.go:141] libmachine: (multinode-985000) DBG | Searching for e2:6:14:d2:13:ae in /var/db/dhcpd_leases ...
	I0805 16:42:15.187143    5659 main.go:141] libmachine: (multinode-985000) DBG | Found 14 entries in /var/db/dhcpd_leases!
	I0805 16:42:15.187156    5659 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a6:1c:88:9c:44:3 ID:1,a6:1c:88:9c:44:3 Lease:0x66b2b3e2}
	I0805 16:42:15.187178    5659 main.go:141] libmachine: (multinode-985000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:e2:6:14:d2:13:ae ID:1,e2:6:14:d2:13:ae Lease:0x66b2b3b9}
	I0805 16:42:15.187190    5659 main.go:141] libmachine: (multinode-985000) DBG | Found match: e2:6:14:d2:13:ae
	I0805 16:42:15.187213    5659 main.go:141] libmachine: (multinode-985000) DBG | IP: 192.169.0.13
	I0805 16:42:15.187246    5659 main.go:141] libmachine: (multinode-985000) Calling .GetConfigRaw
	I0805 16:42:15.187952    5659 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:42:15.188190    5659 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/multinode-985000/config.json ...
	I0805 16:42:15.188736    5659 machine.go:94] provisionDockerMachine start ...
	I0805 16:42:15.188748    5659 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:42:15.188897    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:42:15.189010    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:42:15.189124    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:42:15.189222    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:42:15.189318    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:42:15.189428    5659 main.go:141] libmachine: Using SSH client type: native
	I0805 16:42:15.189618    5659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb98a0c0] 0xb98ce20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:42:15.189628    5659 main.go:141] libmachine: About to run SSH command:
	hostname
	I0805 16:42:15.192845    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:15 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0805 16:42:15.244890    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:15 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0805 16:42:15.245617    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:42:15.245632    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:42:15.245655    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:42:15.245681    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:42:15.628616    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0805 16:42:15.628632    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0805 16:42:15.743067    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0805 16:42:15.743083    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0805 16:42:15.743094    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0805 16:42:15.743102    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0805 16:42:15.743996    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0805 16:42:15.744008    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0805 16:42:21.326955    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:21 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0805 16:42:21.326976    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:21 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0805 16:42:21.326995    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:21 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0805 16:42:21.351514    5659 main.go:141] libmachine: (multinode-985000) DBG | 2024/08/05 16:42:21 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0805 16:42:26.252830    5659 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0805 16:42:26.252844    5659 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:42:26.252982    5659 buildroot.go:166] provisioning hostname "multinode-985000"
	I0805 16:42:26.252994    5659 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:42:26.253123    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:42:26.253211    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:42:26.253300    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:42:26.253392    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:42:26.253486    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:42:26.253620    5659 main.go:141] libmachine: Using SSH client type: native
	I0805 16:42:26.253772    5659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb98a0c0] 0xb98ce20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:42:26.253781    5659 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-985000 && echo "multinode-985000" | sudo tee /etc/hostname
	I0805 16:42:26.317030    5659 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-985000
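
The two SSH commands above (the `hostname` probe and the `sudo hostname ... | sudo tee /etc/hostname` write) ride the same key-authenticated transport that libmachine assembles from the .GetSSH* values. A minimal sketch of such a transport with golang.org/x/crypto/ssh follows; the address, username and key path come from this log, while runSSH itself is an illustrative helper, not libmachine's actual API:

	// Minimal sketch (not libmachine's API): run one command on the guest
	// over key-authenticated SSH, as the provisioner above does.
	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func runSSH(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runSSH("192.169.0.13:22", "docker",
			"/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa",
			"hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(out)
	}
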
	
	I0805 16:42:26.317048    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:42:26.317176    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:42:26.317280    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:42:26.317386    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:42:26.317467    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:42:26.317590    5659 main.go:141] libmachine: Using SSH client type: native
	I0805 16:42:26.317747    5659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb98a0c0] 0xb98ce20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:42:26.317758    5659 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-985000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-985000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-985000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0805 16:42:26.374183    5659 main.go:141] libmachine: SSH cmd err, output: <nil>: 
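
The shell fragment above is a small reconciliation: if no /etc/hosts line already ends in the machine name, it rewrites an existing 127.0.1.1 entry in place, otherwise it appends one. A native-Go sketch of the same logic, as a hypothetical local helper rather than minikube's implementation:

	// Native-Go sketch of the /etc/hosts reconciliation above (hypothetical
	// helper; minikube runs the shell version over SSH).
	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	func ensureHostname(hostsPath, name string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		// Any line already ending in the machine name? Then nothing to do.
		present := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`)
		if present.Match(data) {
			return nil
		}
		loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		var out string
		if loop.Match(data) {
			// Mirror the sed branch: rewrite the loopback alias in place.
			out = loop.ReplaceAllString(string(data), "127.0.1.1 "+name)
		} else {
			// Mirror the `tee -a` branch: append a fresh entry.
			out = strings.TrimRight(string(data), "\n") + "\n127.0.1.1 " + name + "\n"
		}
		return os.WriteFile(hostsPath, []byte(out), 0644)
	}

	func main() {
		if err := ensureHostname("/etc/hosts", "multinode-985000"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
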
	I0805 16:42:26.374202    5659 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19373-1122/.minikube CaCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19373-1122/.minikube}
	I0805 16:42:26.374217    5659 buildroot.go:174] setting up certificates
	I0805 16:42:26.374224    5659 provision.go:84] configureAuth start
	I0805 16:42:26.374231    5659 main.go:141] libmachine: (multinode-985000) Calling .GetMachineName
	I0805 16:42:26.374360    5659 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:42:26.374457    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:42:26.374542    5659 provision.go:143] copyHostCerts
	I0805 16:42:26.374582    5659 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:42:26.374658    5659 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem, removing ...
	I0805 16:42:26.374667    5659 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem
	I0805 16:42:26.374820    5659 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/cert.pem (1123 bytes)
	I0805 16:42:26.375027    5659 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:42:26.375067    5659 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem, removing ...
	I0805 16:42:26.375072    5659 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem
	I0805 16:42:26.375159    5659 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/key.pem (1675 bytes)
	I0805 16:42:26.375295    5659 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:42:26.375335    5659 exec_runner.go:144] found /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem, removing ...
	I0805 16:42:26.375340    5659 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem
	I0805 16:42:26.375424    5659 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19373-1122/.minikube/ca.pem (1082 bytes)
	I0805 16:42:26.375561    5659 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca-key.pem org=jenkins.multinode-985000 san=[127.0.0.1 192.169.0.13 localhost minikube multinode-985000]
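
provision.go:117 issues a server certificate signed by the local CA, with SANs covering every name and address the daemon may be reached by. A rough sketch of such an issuance with the standard crypto/x509 package, assuming an RSA CA key pair on disk; the file names, validity period and key size here are illustrative guesses, not minikube's exact parameters:

	// Rough sketch of issuing a server cert with the SANs from the log line
	// above, signed by an existing RSA CA (illustrative, not minikube code).
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func check(err error) {
		if err != nil {
			log.Fatal(err)
		}
	}

	func main() {
		caPEM, err := os.ReadFile("ca.pem")
		check(err)
		caKeyPEM, err := os.ReadFile("ca-key.pem")
		check(err)
		caBlock, _ := pem.Decode(caPEM) // assumes well-formed PEM input
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		check(err)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		check(err)

		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		check(err)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-985000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(10, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs copied from the provision.go line above.
			DNSNames:    []string{"localhost", "minikube", "multinode-985000"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.13")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		check(err)
		check(os.WriteFile("server.pem",
			pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644))
		check(os.WriteFile("server-key.pem",
			pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0600))
	}
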
	I0805 16:42:26.515656    5659 provision.go:177] copyRemoteCerts
	I0805 16:42:26.515712    5659 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0805 16:42:26.515730    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:42:26.515870    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:42:26.515971    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:42:26.516059    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:42:26.516140    5659 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:42:26.548371    5659 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0805 16:42:26.548463    5659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0805 16:42:26.567260    5659 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0805 16:42:26.567322    5659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0805 16:42:26.586444    5659 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0805 16:42:26.586510    5659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0805 16:42:26.605884    5659 provision.go:87] duration metric: took 231.64576ms to configureAuth
	I0805 16:42:26.605897    5659 buildroot.go:189] setting minikube options for container-runtime
	I0805 16:42:26.606062    5659 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:42:26.606075    5659 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:42:26.606204    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:42:26.606280    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:42:26.606352    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:42:26.606437    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:42:26.606522    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:42:26.606639    5659 main.go:141] libmachine: Using SSH client type: native
	I0805 16:42:26.606778    5659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb98a0c0] 0xb98ce20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:42:26.606785    5659 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0805 16:42:26.656026    5659 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0805 16:42:26.656037    5659 buildroot.go:70] root file system type: tmpfs
	I0805 16:42:26.656118    5659 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0805 16:42:26.656134    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:42:26.656263    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:42:26.656361    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:42:26.656460    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:42:26.656556    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:42:26.656680    5659 main.go:141] libmachine: Using SSH client type: native
	I0805 16:42:26.656830    5659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb98a0c0] 0xb98ce20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:42:26.656875    5659 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0805 16:42:26.718007    5659 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0805 16:42:26.718035    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:42:26.718170    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:42:26.718286    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:42:26.718402    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:42:26.718496    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:42:26.718628    5659 main.go:141] libmachine: Using SSH client type: native
	I0805 16:42:26.718778    5659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb98a0c0] 0xb98ce20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:42:26.718790    5659 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0805 16:42:28.402511    5659 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0805 16:42:28.402526    5659 machine.go:97] duration metric: took 13.213764117s to provisionDockerMachine
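
The `diff ... || { mv ...; systemctl ... }` one-liner above keeps the unit update idempotent: Docker is reloaded, enabled and restarted only when docker.service.new actually differs from the installed unit (here the diff fails because no unit exists yet, so the new file is installed and the service enabled). A local Go sketch of the same compare-then-swap pattern, with updateUnit as a hypothetical helper that assumes root privileges:

	// Sketch of the compare-then-swap above (hypothetical helper): only
	// replace the unit and restart docker when the content changed.
	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func updateUnit(path string, newContent []byte) error {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, newContent) {
			return nil // unit unchanged: skip the expensive docker restart
		}
		if err := os.WriteFile(path+".new", newContent, 0644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil { // the log's `sudo mv`
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "-f", "daemon-reload"},
			{"systemctl", "-f", "enable", "docker"},
			{"systemctl", "-f", "restart", "docker"},
		} {
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %s: %w", args, out, err)
			}
		}
		return nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
		if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
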
	I0805 16:42:28.402540    5659 start.go:293] postStartSetup for "multinode-985000" (driver="hyperkit")
	I0805 16:42:28.402547    5659 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0805 16:42:28.402562    5659 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:42:28.402759    5659 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0805 16:42:28.402773    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:42:28.402872    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:42:28.402962    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:42:28.403038    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:42:28.403130    5659 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:42:28.443497    5659 ssh_runner.go:195] Run: cat /etc/os-release
	I0805 16:42:28.447494    5659 command_runner.go:130] > NAME=Buildroot
	I0805 16:42:28.447503    5659 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0805 16:42:28.447507    5659 command_runner.go:130] > ID=buildroot
	I0805 16:42:28.447511    5659 command_runner.go:130] > VERSION_ID=2023.02.9
	I0805 16:42:28.447515    5659 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0805 16:42:28.447664    5659 info.go:137] Remote host: Buildroot 2023.02.9
	I0805 16:42:28.447680    5659 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/addons for local assets ...
	I0805 16:42:28.447787    5659 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19373-1122/.minikube/files for local assets ...
	I0805 16:42:28.447976    5659 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> 16782.pem in /etc/ssl/certs
	I0805 16:42:28.447982    5659 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem -> /etc/ssl/certs/16782.pem
	I0805 16:42:28.448196    5659 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0805 16:42:28.460277    5659 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/ssl/certs/16782.pem --> /etc/ssl/certs/16782.pem (1708 bytes)
	I0805 16:42:28.483553    5659 start.go:296] duration metric: took 81.00439ms for postStartSetup
	I0805 16:42:28.483578    5659 fix.go:56] duration metric: took 13.488344017s for fixHost
	I0805 16:42:28.483591    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:42:28.483720    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:42:28.483808    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:42:28.483893    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:42:28.483964    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:42:28.484072    5659 main.go:141] libmachine: Using SSH client type: native
	I0805 16:42:28.484214    5659 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb98a0c0] 0xb98ce20 <nil>  [] 0s} 192.169.0.13 22 <nil> <nil>}
	I0805 16:42:28.484221    5659 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0805 16:42:28.532364    5659 main.go:141] libmachine: SSH cmd err, output: <nil>: 1722901348.693295049
	
	I0805 16:42:28.532375    5659 fix.go:216] guest clock: 1722901348.693295049
	I0805 16:42:28.532381    5659 fix.go:229] Guest: 2024-08-05 16:42:28.693295049 -0700 PDT Remote: 2024-08-05 16:42:28.483581 -0700 PDT m=+13.916480118 (delta=209.714049ms)
	I0805 16:42:28.532403    5659 fix.go:200] guest clock delta is within tolerance: 209.714049ms
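
The guest-clock check above parses the `date +%s.%N` output and compares it against the host clock captured when the command returned, tolerating small skew. A sketch of that computation, reproducing the exact values from the log:

	// Reproduce the delta computation with the values from the log: the
	// guest printed 1722901348.693295049 while the host clock read
	// 2024-08-05 16:42:28.483581 PDT, giving ~209.714049ms of skew.
	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func guestDelta(dateOutput string, host time.Time) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(host), nil
	}

	func main() {
		host := time.Date(2024, 8, 5, 16, 42, 28, 483581000, time.FixedZone("PDT", -7*3600))
		d, err := guestDelta("1722901348.693295049\n", host)
		if err != nil {
			panic(err)
		}
		fmt.Println(d) // 209.714049ms, within minikube's tolerance
	}
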
	I0805 16:42:28.532407    5659 start.go:83] releasing machines lock for "multinode-985000", held for 13.53724298s
	I0805 16:42:28.532426    5659 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:42:28.532545    5659 main.go:141] libmachine: (multinode-985000) Calling .GetIP
	I0805 16:42:28.532637    5659 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:42:28.532923    5659 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:42:28.533042    5659 main.go:141] libmachine: (multinode-985000) Calling .DriverName
	I0805 16:42:28.533120    5659 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0805 16:42:28.533148    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:42:28.533200    5659 ssh_runner.go:195] Run: cat /version.json
	I0805 16:42:28.533214    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHHostname
	I0805 16:42:28.533242    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:42:28.533306    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHPort
	I0805 16:42:28.533327    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:42:28.533389    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:42:28.533416    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHKeyPath
	I0805 16:42:28.533471    5659 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:42:28.533501    5659 main.go:141] libmachine: (multinode-985000) Calling .GetSSHUsername
	I0805 16:42:28.533571    5659 sshutil.go:53] new ssh client: &{IP:192.169.0.13 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/multinode-985000/id_rsa Username:docker}
	I0805 16:42:28.612240    5659 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0805 16:42:28.613312    5659 command_runner.go:130] > {"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}
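
The /version.json probed above is a small manifest tying the booted ISO to a minikube build. Decoding it in Go, with struct tags mirroring the keys shown in the log:

	// Decode the guest's /version.json; the raw payload below is copied
	// verbatim from the log line above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type versionJSON struct {
		ISOVersion      string `json:"iso_version"`
		KicbaseVersion  string `json:"kicbase_version"`
		MinikubeVersion string `json:"minikube_version"`
		Commit          string `json:"commit"`
	}

	func main() {
		raw := `{"iso_version": "v1.33.1-1722248113-19339", "kicbase_version": "v0.0.44-1721902582-19326", "minikube_version": "v1.33.1", "commit": "b8389556a97747a5bbaa1906d238251ad536d76e"}`
		var v versionJSON
		if err := json.Unmarshal([]byte(raw), &v); err != nil {
			panic(err)
		}
		fmt.Println(v.ISOVersion, v.MinikubeVersion)
	}
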
	I0805 16:42:28.613501    5659 ssh_runner.go:195] Run: systemctl --version
	I0805 16:42:28.618421    5659 command_runner.go:130] > systemd 252 (252)
	I0805 16:42:28.618438    5659 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0805 16:42:28.618598    5659 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0805 16:42:28.622807    5659 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0805 16:42:28.622872    5659 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0805 16:42:28.622910    5659 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0805 16:42:28.635309    5659 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0805 16:42:28.635335    5659 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
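
The find/mv pipeline above sidelines bridge and podman CNI definitions by renaming them with a .mk_disabled suffix so the container runtime ignores them. A simplified native-Go equivalent of that sweep (illustrative only; minikube shells out to find as shown):

	// Sketch of the CNI-disabling step: rename any bridge/podman configs in
	// /etc/cni/net.d so the runtime no longer loads them.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func disableBridgeCNI(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var moved []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return moved, err
				}
				moved = append(moved, src)
			}
		}
		return moved, nil
	}

	func main() {
		moved, err := disableBridgeCNI("/etc/cni/net.d")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
		fmt.Println(moved) // e.g. [/etc/cni/net.d/87-podman-bridge.conflist]
	}
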
	I0805 16:42:28.635344    5659 start.go:495] detecting cgroup driver to use...
	I0805 16:42:28.635438    5659 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:42:28.650271    5659 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0805 16:42:28.650545    5659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0805 16:42:28.659597    5659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0805 16:42:28.668425    5659 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0805 16:42:28.668465    5659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0805 16:42:28.677182    5659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:42:28.686104    5659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0805 16:42:28.694924    5659 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0805 16:42:28.703832    5659 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0805 16:42:28.712750    5659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0805 16:42:28.721720    5659 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0805 16:42:28.730442    5659 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0805 16:42:28.739501    5659 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0805 16:42:28.747544    5659 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0805 16:42:28.747708    5659 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0805 16:42:28.755719    5659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:42:28.853697    5659 ssh_runner.go:195] Run: sudo systemctl restart containerd
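
The run of sed edits above rewrites containerd's config.toml in place; the SystemdCgroup line is the one that actually selects the cgroupfs driver. The same edit expressed natively in Go, with setSystemdCgroup as a hypothetical helper:

	// Native-Go version of the `sed -i ... SystemdCgroup = false` step above.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func setSystemdCgroup(path string, enabled bool) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		// Same pattern as the sed call: keep the indentation, swap the value.
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte(fmt.Sprintf("${1}SystemdCgroup = %t", enabled)))
		return os.WriteFile(path, out, 0644)
	}

	func main() {
		if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
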
	I0805 16:42:28.872741    5659 start.go:495] detecting cgroup driver to use...
	I0805 16:42:28.872824    5659 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0805 16:42:28.897109    5659 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0805 16:42:28.897122    5659 command_runner.go:130] > [Unit]
	I0805 16:42:28.897127    5659 command_runner.go:130] > Description=Docker Application Container Engine
	I0805 16:42:28.897131    5659 command_runner.go:130] > Documentation=https://docs.docker.com
	I0805 16:42:28.897136    5659 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0805 16:42:28.897140    5659 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0805 16:42:28.897146    5659 command_runner.go:130] > StartLimitBurst=3
	I0805 16:42:28.897150    5659 command_runner.go:130] > StartLimitIntervalSec=60
	I0805 16:42:28.897153    5659 command_runner.go:130] > [Service]
	I0805 16:42:28.897157    5659 command_runner.go:130] > Type=notify
	I0805 16:42:28.897161    5659 command_runner.go:130] > Restart=on-failure
	I0805 16:42:28.897167    5659 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0805 16:42:28.897175    5659 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0805 16:42:28.897181    5659 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0805 16:42:28.897192    5659 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0805 16:42:28.897198    5659 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0805 16:42:28.897204    5659 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0805 16:42:28.897211    5659 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0805 16:42:28.897220    5659 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0805 16:42:28.897226    5659 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0805 16:42:28.897231    5659 command_runner.go:130] > ExecStart=
	I0805 16:42:28.897243    5659 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0805 16:42:28.897248    5659 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0805 16:42:28.897254    5659 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0805 16:42:28.897260    5659 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0805 16:42:28.897264    5659 command_runner.go:130] > LimitNOFILE=infinity
	I0805 16:42:28.897267    5659 command_runner.go:130] > LimitNPROC=infinity
	I0805 16:42:28.897271    5659 command_runner.go:130] > LimitCORE=infinity
	I0805 16:42:28.897276    5659 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0805 16:42:28.897280    5659 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0805 16:42:28.897284    5659 command_runner.go:130] > TasksMax=infinity
	I0805 16:42:28.897288    5659 command_runner.go:130] > TimeoutStartSec=0
	I0805 16:42:28.897293    5659 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0805 16:42:28.897296    5659 command_runner.go:130] > Delegate=yes
	I0805 16:42:28.897301    5659 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0805 16:42:28.897305    5659 command_runner.go:130] > KillMode=process
	I0805 16:42:28.897308    5659 command_runner.go:130] > [Install]
	I0805 16:42:28.897317    5659 command_runner.go:130] > WantedBy=multi-user.target
	I0805 16:42:28.897388    5659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:42:28.909434    5659 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0805 16:42:28.927989    5659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0805 16:42:28.939846    5659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:42:28.950762    5659 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0805 16:42:28.971525    5659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0805 16:42:28.981746    5659 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0805 16:42:28.996341    5659 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0805 16:42:28.996553    5659 ssh_runner.go:195] Run: which cri-dockerd
	I0805 16:42:28.999502    5659 command_runner.go:130] > /usr/bin/cri-dockerd
	I0805 16:42:28.999661    5659 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0805 16:42:29.006802    5659 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0805 16:42:29.020541    5659 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0805 16:42:29.113120    5659 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0805 16:42:29.223010    5659 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0805 16:42:29.223084    5659 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
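
The 130-byte daemon.json pushed above is what tells dockerd to use the cgroupfs driver. A minimal illustrative version follows; exec-opts with native.cgroupdriver is Docker's standard knob for this, but the exact field set minikube writes is not shown in the log, so treat the payload as an assumption:

	// Illustrative daemon.json writer; the full field set minikube sends is
	// an assumption here, not a dump of the real 130-byte payload.
	package main

	import (
		"encoding/json"
		"os"
	)

	func main() {
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		data, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/docker/daemon.json", data, 0644); err != nil {
			panic(err)
		}
	}
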
	I0805 16:42:29.236940    5659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0805 16:42:29.328523    5659 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0805 16:43:30.406092    5659 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0805 16:43:30.406107    5659 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0805 16:43:30.406125    5659 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.025107532s)
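
The restart above hangs for a full minute and fails, and minikube's next step is to pull the unit's journal into the report. A sketch of that failure path as a single hypothetical helper (run locally here; the log's version issues the same two commands through ssh_runner):

	// Failure-path sketch: restart the unit, and on a non-zero exit attach
	// the unit's journal to the returned error for diagnosis.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func restartWithDiagnostics(unit string) error {
		out, err := exec.Command("sudo", "systemctl", "restart", unit).CombinedOutput()
		if err == nil {
			return nil
		}
		journal, _ := exec.Command("sudo", "journalctl", "--no-pager", "-u", unit).CombinedOutput()
		return fmt.Errorf("restart %s: %w\noutput:\n%s\njournal:\n%s", unit, err, out, journal)
	}

	func main() {
		if err := restartWithDiagnostics("docker"); err != nil {
			fmt.Println(err)
		}
	}
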
	I0805 16:43:30.406180    5659 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0805 16:43:30.418130    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:43:30.418143    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[489]: time="2024-08-05T23:42:27.172051629Z" level=info msg="Starting up"
	I0805 16:43:30.418155    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[489]: time="2024-08-05T23:42:27.172532596Z" level=info msg="containerd not running, starting managed containerd"
	I0805 16:43:30.418168    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[489]: time="2024-08-05T23:42:27.173140759Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=496
	I0805 16:43:30.418178    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.189839802Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0805 16:43:30.418188    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.204988613Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0805 16:43:30.418198    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205010910Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0805 16:43:30.418207    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205050825Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0805 16:43:30.418216    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205061776Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0805 16:43:30.418230    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205205654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:43:30.418239    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205271378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:43:30.418258    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205384517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:43:30.418267    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205418839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0805 16:43:30.418278    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205431289Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:43:30.418288    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205438576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0805 16:43:30.418297    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205575437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:43:30.418307    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205781727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0805 16:43:30.418321    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.207248757Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:43:30.418330    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.207306355Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0805 16:43:30.418435    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.207445168Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0805 16:43:30.418457    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.207488147Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0805 16:43:30.418467    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.207594283Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0805 16:43:30.418475    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.207650212Z" level=info msg="metadata content store policy set" policy=shared
	I0805 16:43:30.418486    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209034543Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0805 16:43:30.418494    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209101962Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0805 16:43:30.418502    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209170981Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0805 16:43:30.418510    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209214144Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0805 16:43:30.418518    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209249563Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0805 16:43:30.418526    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209348684Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0805 16:43:30.418535    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209530812Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0805 16:43:30.418543    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209611615Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0805 16:43:30.418552    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209648943Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0805 16:43:30.418561    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209679386Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0805 16:43:30.418571    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209711106Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0805 16:43:30.418580    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209743780Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0805 16:43:30.418589    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209779797Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0805 16:43:30.418598    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209813169Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0805 16:43:30.418607    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209844041Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0805 16:43:30.418616    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209879845Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0805 16:43:30.418626    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209909920Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0805 16:43:30.418761    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209937882Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0805 16:43:30.418773    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209977036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0805 16:43:30.418787    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210012358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0805 16:43:30.418796    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210041866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0805 16:43:30.418810    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210074563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0805 16:43:30.418819    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210104358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0805 16:43:30.418828    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210132969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0805 16:43:30.418836    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210160994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0805 16:43:30.418845    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210189749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0805 16:43:30.418854    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210218400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0805 16:43:30.418862    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210249418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0805 16:43:30.418871    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210280621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0805 16:43:30.418882    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210309307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0805 16:43:30.418891    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210340360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0805 16:43:30.418900    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210371621Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0805 16:43:30.418909    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210405765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0805 16:43:30.418922    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210435688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0805 16:43:30.418931    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210464560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0805 16:43:30.418940    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210560257Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0805 16:43:30.418952    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210602861Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0805 16:43:30.418961    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210833643Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0805 16:43:30.418972    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210877302Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0805 16:43:30.419042    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210910825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0805 16:43:30.419054    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210946182Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0805 16:43:30.419062    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210977967Z" level=info msg="NRI interface is disabled by configuration."
	I0805 16:43:30.419070    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.211202600Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0805 16:43:30.419078    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.211289403Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0805 16:43:30.419087    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.211348056Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0805 16:43:30.419094    5659 command_runner.go:130] > Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.211382053Z" level=info msg="containerd successfully booted in 0.022358s"
	I0805 16:43:30.419102    5659 command_runner.go:130] > Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.198164954Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0805 16:43:30.419109    5659 command_runner.go:130] > Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.237912977Z" level=info msg="Loading containers: start."
	I0805 16:43:30.419131    5659 command_runner.go:130] > Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.428877151Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0805 16:43:30.419142    5659 command_runner.go:130] > Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.487813026Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0805 16:43:30.419154    5659 command_runner.go:130] > Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.532027871Z" level=warning msg="error locating sandbox id aa27481cd89075c0ef8688e6c2dca6e1138a26652b0a9b17835c08e54c57de4d: sandbox aa27481cd89075c0ef8688e6c2dca6e1138a26652b0a9b17835c08e54c57de4d not found"
	I0805 16:43:30.419165    5659 command_runner.go:130] > Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.532091351Z" level=warning msg="error locating sandbox id 195c8a255d9020f76e8222364f80d2b2c432852740e9ea52fa331b4f96c05736: sandbox 195c8a255d9020f76e8222364f80d2b2c432852740e9ea52fa331b4f96c05736 not found"
	I0805 16:43:30.419173    5659 command_runner.go:130] > Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.532351059Z" level=info msg="Loading containers: done."
	I0805 16:43:30.419181    5659 command_runner.go:130] > Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.539610870Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	I0805 16:43:30.419189    5659 command_runner.go:130] > Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.539769626Z" level=info msg="Daemon has completed initialization"
	I0805 16:43:30.419196    5659 command_runner.go:130] > Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.561409200Z" level=info msg="API listen on /var/run/docker.sock"
	I0805 16:43:30.419212    5659 command_runner.go:130] > Aug 05 23:42:28 multinode-985000 systemd[1]: Started Docker Application Container Engine.
	I0805 16:43:30.419234    5659 command_runner.go:130] > Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.563511550Z" level=info msg="API listen on [::]:2376"
	I0805 16:43:30.419244    5659 command_runner.go:130] > Aug 05 23:42:29 multinode-985000 dockerd[489]: time="2024-08-05T23:42:29.501925302Z" level=info msg="Processing signal 'terminated'"
	I0805 16:43:30.419255    5659 command_runner.go:130] > Aug 05 23:42:29 multinode-985000 dockerd[489]: time="2024-08-05T23:42:29.502849406Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0805 16:43:30.419262    5659 command_runner.go:130] > Aug 05 23:42:29 multinode-985000 dockerd[489]: time="2024-08-05T23:42:29.502885473Z" level=info msg="Daemon shutdown complete"
	I0805 16:43:30.419303    5659 command_runner.go:130] > Aug 05 23:42:29 multinode-985000 dockerd[489]: time="2024-08-05T23:42:29.502922180Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0805 16:43:30.419313    5659 command_runner.go:130] > Aug 05 23:42:29 multinode-985000 dockerd[489]: time="2024-08-05T23:42:29.502933274Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0805 16:43:30.419320    5659 command_runner.go:130] > Aug 05 23:42:29 multinode-985000 systemd[1]: Stopping Docker Application Container Engine...
	I0805 16:43:30.419326    5659 command_runner.go:130] > Aug 05 23:42:30 multinode-985000 systemd[1]: docker.service: Deactivated successfully.
	I0805 16:43:30.419331    5659 command_runner.go:130] > Aug 05 23:42:30 multinode-985000 systemd[1]: Stopped Docker Application Container Engine.
	I0805 16:43:30.419337    5659 command_runner.go:130] > Aug 05 23:42:30 multinode-985000 systemd[1]: Starting Docker Application Container Engine...
	I0805 16:43:30.419344    5659 command_runner.go:130] > Aug 05 23:42:30 multinode-985000 dockerd[914]: time="2024-08-05T23:42:30.540439916Z" level=info msg="Starting up"
	I0805 16:43:30.419354    5659 command_runner.go:130] > Aug 05 23:43:30 multinode-985000 dockerd[914]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0805 16:43:30.419369    5659 command_runner.go:130] > Aug 05 23:43:30 multinode-985000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0805 16:43:30.419376    5659 command_runner.go:130] > Aug 05 23:43:30 multinode-985000 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0805 16:43:30.419382    5659 command_runner.go:130] > Aug 05 23:43:30 multinode-985000 systemd[1]: Failed to start Docker Application Container Engine.
	I0805 16:43:30.443944    5659 out.go:177] 
	W0805 16:43:30.465818    5659 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 05 23:42:27 multinode-985000 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:42:27 multinode-985000 dockerd[489]: time="2024-08-05T23:42:27.172051629Z" level=info msg="Starting up"
	Aug 05 23:42:27 multinode-985000 dockerd[489]: time="2024-08-05T23:42:27.172532596Z" level=info msg="containerd not running, starting managed containerd"
	Aug 05 23:42:27 multinode-985000 dockerd[489]: time="2024-08-05T23:42:27.173140759Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=496
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.189839802Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.204988613Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205010910Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205050825Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205061776Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205205654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205271378Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205384517Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205418839Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205431289Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205438576Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205575437Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.205781727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.207248757Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.207306355Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.207445168Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.207488147Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.207594283Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.207650212Z" level=info msg="metadata content store policy set" policy=shared
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209034543Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209101962Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209170981Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209214144Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209249563Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209348684Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209530812Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209611615Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209648943Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209679386Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209711106Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209743780Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209779797Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209813169Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209844041Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209879845Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209909920Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209937882Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.209977036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210012358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210041866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210074563Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210104358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210132969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210160994Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210189749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210218400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210249418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210280621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210309307Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210340360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210371621Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210405765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210435688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210464560Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210560257Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210602861Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210833643Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210877302Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210910825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210946182Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.210977967Z" level=info msg="NRI interface is disabled by configuration."
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.211202600Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.211289403Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.211348056Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 05 23:42:27 multinode-985000 dockerd[496]: time="2024-08-05T23:42:27.211382053Z" level=info msg="containerd successfully booted in 0.022358s"
	Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.198164954Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.237912977Z" level=info msg="Loading containers: start."
	Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.428877151Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.487813026Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.532027871Z" level=warning msg="error locating sandbox id aa27481cd89075c0ef8688e6c2dca6e1138a26652b0a9b17835c08e54c57de4d: sandbox aa27481cd89075c0ef8688e6c2dca6e1138a26652b0a9b17835c08e54c57de4d not found"
	Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.532091351Z" level=warning msg="error locating sandbox id 195c8a255d9020f76e8222364f80d2b2c432852740e9ea52fa331b4f96c05736: sandbox 195c8a255d9020f76e8222364f80d2b2c432852740e9ea52fa331b4f96c05736 not found"
	Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.532351059Z" level=info msg="Loading containers: done."
	Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.539610870Z" level=info msg="Docker daemon" commit=cc13f95 containerd-snapshotter=false storage-driver=overlay2 version=27.1.1
	Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.539769626Z" level=info msg="Daemon has completed initialization"
	Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.561409200Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 05 23:42:28 multinode-985000 systemd[1]: Started Docker Application Container Engine.
	Aug 05 23:42:28 multinode-985000 dockerd[489]: time="2024-08-05T23:42:28.563511550Z" level=info msg="API listen on [::]:2376"
	Aug 05 23:42:29 multinode-985000 dockerd[489]: time="2024-08-05T23:42:29.501925302Z" level=info msg="Processing signal 'terminated'"
	Aug 05 23:42:29 multinode-985000 dockerd[489]: time="2024-08-05T23:42:29.502849406Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 05 23:42:29 multinode-985000 dockerd[489]: time="2024-08-05T23:42:29.502885473Z" level=info msg="Daemon shutdown complete"
	Aug 05 23:42:29 multinode-985000 dockerd[489]: time="2024-08-05T23:42:29.502922180Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 05 23:42:29 multinode-985000 dockerd[489]: time="2024-08-05T23:42:29.502933274Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 05 23:42:29 multinode-985000 systemd[1]: Stopping Docker Application Container Engine...
	Aug 05 23:42:30 multinode-985000 systemd[1]: docker.service: Deactivated successfully.
	Aug 05 23:42:30 multinode-985000 systemd[1]: Stopped Docker Application Container Engine.
	Aug 05 23:42:30 multinode-985000 systemd[1]: Starting Docker Application Container Engine...
	Aug 05 23:42:30 multinode-985000 dockerd[914]: time="2024-08-05T23:42:30.540439916Z" level=info msg="Starting up"
	Aug 05 23:43:30 multinode-985000 dockerd[914]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 05 23:43:30 multinode-985000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 05 23:43:30 multinode-985000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 05 23:43:30 multinode-985000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0805 16:43:30.465959    5659 out.go:239] * 
	W0805 16:43:30.467307    5659 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0805 16:43:30.529759    5659 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-985000 --wait=true -v=8 --alsologtostderr --driver=hyperkit " : exit status 90
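What failed above: after `sudo systemctl restart docker`, the new dockerd (pid 914) logged "Starting up" at 23:42:30 and then sat for exactly 60 seconds before aborting with "context deadline exceeded" on /run/containerd/containerd.sock, which is why systemd reports status 1/FAILURE and minikube exits with RUNTIME_ENABLE. A minimal Go sketch of a dial loop with the same deadline behavior (illustrative only; dockerd actually connects over gRPC, and the one-second retry interval is an assumption, not from the log):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 60s matches the gap between "Starting up" (23:42:30) and the
		// "failed to dial" error (23:43:30) in the journal above.
		ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
		defer cancel()

		var d net.Dialer
		for {
			conn, err := d.DialContext(ctx, "unix", "/run/containerd/containerd.sock")
			if err == nil {
				conn.Close()
				fmt.Println("containerd socket is reachable")
				return
			}
			select {
			case <-ctx.Done():
				// Mirrors the daemon's fatal log line above.
				fmt.Printf("failed to dial %q: %v\n", "/run/containerd/containerd.sock", ctx.Err())
				return
			case <-time.After(time.Second): // retry interval: assumed
			}
		}
	}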
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-985000 -n multinode-985000: exit status 6 (146.194603ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 16:43:30.724146    5693 status.go:417] kubeconfig endpoint: get endpoint: "multinode-985000" does not appear in /Users/jenkins/minikube-integration/19373-1122/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "multinode-985000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMultiNode/serial/RestartMultiNode (76.13s)
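The post-mortem status probe fails for a second, independent reason: the cluster was never written to the kubeconfig, so status.go cannot resolve an endpoint for the profile. A sketch of that check using k8s.io/client-go's clientcmd package (a plausible reconstruction, not minikube's exact code); the path and profile name are the ones from the error above:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		path := "/Users/jenkins/minikube-integration/19373-1122/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			panic(err)
		}
		// The profile is looked up by name; no "multinode-985000" cluster
		// entry exists, hence the "does not appear in ..." error above.
		if _, ok := cfg.Clusters["multinode-985000"]; !ok {
			fmt.Printf("get endpoint: %q does not appear in %s\n", "multinode-985000", path)
		}
	}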

                                                
                                    
TestScheduledStopUnix (142.31s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-707000 --memory=2048 --driver=hyperkit 
E0805 16:46:50.652186    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-707000 --memory=2048 --driver=hyperkit : exit status 80 (2m16.989607216s)

                                                
                                                
-- stdout --
	* [scheduled-stop-707000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-707000" primary control-plane node in "scheduled-stop-707000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "scheduled-stop-707000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 7e:da:17:69:98:f7
	* Failed to start hyperkit VM. Running "minikube delete -p scheduled-stop-707000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 4e:d5:f7:57:a0:39
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 4e:d5:f7:57:a0:39
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-08-05 16:48:54.856005 -0700 PDT m=+3714.429717340
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-707000 -n scheduled-stop-707000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-707000 -n scheduled-stop-707000: exit status 7 (76.903838ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 16:48:54.931221    5955 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0805 16:48:54.931239    5955 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-707000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "scheduled-stop-707000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-707000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-707000: (5.245639235s)
--- FAIL: TestScheduledStopUnix (142.31s)
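Both creation attempts above died the same way: the VM booted, but no lease for its MAC ever showed up in the host's DHCP database, so the driver's IP poll timed out, and minikube deleted the machine and retried once (note the two different MACs in stderr) before giving up. A hedged sketch of that lookup; the file location (/var/db/dhcpd_leases) and the ip_address=/hw_address= entry layout are assumptions based on the stock macOS bootpd lease format, and the MAC is the one from the first attempt:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// findIP scans bootpd's lease file for an entry whose hw_address ends in
	// mac and returns the ip_address recorded just above it in that entry.
	func findIP(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "ip_address=") {
				ip = strings.TrimPrefix(line, "ip_address=")
			}
			// hw_address lines look like "hw_address=1,7e:da:17:69:98:f7".
			if strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac) {
				return ip, nil
			}
		}
		return "", fmt.Errorf("could not find an IP address for %s", mac)
	}

	func main() {
		fmt.Println(findIP("/var/db/dhcpd_leases", "7e:da:17:69:98:f7"))
	}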

                                                
                                    
TestPause/serial/Start (141.07s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-153000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-153000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : exit status 80 (2m20.991621905s)

                                                
                                                
-- stdout --
	* [pause-153000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "pause-153000" primary control-plane node in "pause-153000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	* Deleting "pause-153000" in hyperkit ...
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for 2a:b2:9d:62:f2:a4
	* Failed to start hyperkit VM. Running "minikube delete -p pause-153000" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e:92:b1:c6:ff:5e
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: IP address never found in dhcp leases file Temporary error: could not find an IP address for e:92:b1:c6:ff:5e
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-153000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-153000 -n pause-153000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-153000 -n pause-153000: exit status 7 (81.463027ms)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0805 17:30:22.561956    8760 status.go:352] failed to get driver ip: getting IP: IP address is not set
	E0805 17:30:22.561978    8760 status.go:249] status error: getting IP: IP address is not set

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-153000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/Start (141.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (7201.718s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-709000 --alsologtostderr -v=3
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (56m9s)
	TestNetworkPlugins/group (7m44s)
	TestStartStop (17m11s)
	TestStartStop/group/default-k8s-diff-port (1m1s)
	TestStartStop/group/default-k8s-diff-port/serial (1m1s)
	TestStartStop/group/default-k8s-diff-port/serial/Stop (1s)
	TestStartStop/group/old-k8s-version (8m21s)
	TestStartStop/group/old-k8s-version/serial (8m21s)
	TestStartStop/group/old-k8s-version/serial/SecondStart (5m32s)
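Everything below is the goroutine dump `go test` prints when its global deadline fires. This suite ran with a 2-hour -timeout; the Stop subtest itself had been running for only 1s, but long-lived siblings (TestNetworkPlugins at 56m, TestStartStop at 17m) had already consumed the budget, so the whole binary is killed and this test gets charged 7201.718s. A minimal sketch of the alarm mechanism (the real one is testing.(*M).startAlarm, visible in goroutine 4061 below; runAllTests is a hypothetical stand-in for the suite body):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		timeout := 2 * time.Hour
		// One timer guards the whole test binary; when it fires, the panic
		// below produces a full goroutine dump like the one in this report.
		alarm := time.AfterFunc(timeout, func() {
			panic(fmt.Sprintf("test timed out after %v", timeout))
		})
		defer alarm.Stop()
		runAllTests() // hypothetical stand-in for the suite body
	}

	func runAllTests() { /* ... */ }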

                                                
                                                
goroutine 4061 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/usr/local/go/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 21 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc00041e1a0, 0xc00127bbb0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000914330, {0x143efd00, 0x2a, 0x2a}, {0xfebf825?, 0x119f8fe5?, 0x14412d00?})
	/usr/local/go/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc0008b2640)
	/usr/local/go/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc0008b2640)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

                                                
                                                
goroutine 13 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc000699b80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2993 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1308d3d0, 0xc000144060}, 0xc000094750, 0xc000094798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1308d3d0, 0xc000144060}, 0x0?, 0xc000094750, 0xc000094798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1308d3d0?, 0xc000144060?}, 0xc001b64000?, 0xff336a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0000947d0?, 0xff799a4?, 0xc001d06000?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2978
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 182 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008ed0c0, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 180
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 181 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0009211a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 180
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 2730 [chan receive, 17 minutes]:
testing.(*testContext).waitParallel(0xc00080c550)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0006dba00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0006dba00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0006dba00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0006dba00, 0xc000994800)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2728
	/usr/local/go/src/testing/testing.go:1742 +0x390
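Goroutine 2730 is the telling one in this dump: a TestStartStop subtest has been parked in waitParallel for 17 minutes because t.Parallel() does not return until a parallel slot (capped by -test.parallel) frees up. That queueing, not the stop itself, is where most of the 7201s went. A small sketch of the pattern (the suite's MaybeParallel, on the stack above, wraps this same call; the profile names in the loop are illustrative):

	package integration_test

	import "testing"

	func TestStartStopGroups(t *testing.T) {
		for _, name := range []string{"old-k8s-version", "default-k8s-diff-port"} {
			name := name // capture the range variable for the parallel closure
			t.Run(name, func(t *testing.T) {
				t.Parallel() // blocks in testContext.waitParallel until a slot frees
				_ = name     // ... drive minikube start/stop for this profile ...
			})
		}
	}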

                                                
                                                
goroutine 88 [running]:
	goroutine running on other thread; stack unavailable
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 87
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.130.1/klog.go:1137 +0x171

                                                
                                                
goroutine 193 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 176
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2912 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2911
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 176 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1308d3d0, 0xc000144060}, 0xc0008a9750, 0xc00089ef98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1308d3d0, 0xc000144060}, 0x0?, 0xc0008a9750, 0xc0008a9798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1308d3d0?, 0xc000144060?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 182
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 175 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0008ed090, 0x2d)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x12b51860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc000921080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008ed0c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a7e010, {0x13069620, 0xc00064e180}, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000a7e010, 0x3b9aca00, 0x0, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 182
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2910 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc001300f50, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x12b51860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0018081e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001300f80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00162f950, {0x13069620, 0xc0016e09c0}, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00162f950, 0x3b9aca00, 0x0, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2918
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 1458 [select, 104 minutes]:
net/http.(*persistConn).writeLoop(0xc001cc7680)
	/usr/local/go/src/net/http/transport.go:2458 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1450
	/usr/local/go/src/net/http/transport.go:1800 +0x1585

goroutine 778 [IO wait, 108 minutes]:
internal/poll.runtime_pollWait(0x5bc99480, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0009d2380?, 0x3fe?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0009d2380)
	/usr/local/go/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0009d2380)
	/usr/local/go/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc0014d01e0)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc0014d01e0)
	/usr/local/go/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0004f40f0, {0x130802f0, 0xc0014d01e0})
	/usr/local/go/src/net/http/server.go:3260 +0x33e
net/http.(*Server).ListenAndServe(0xc0004f40f0)
	/usr/local/go/src/net/http/server.go:3189 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc0006db6c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 775
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 3259 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1308d3d0, 0xc000144060}, 0xc0008a9750, 0xc0008a9798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1308d3d0, 0xc000144060}, 0x40?, 0xc0008a9750, 0xc0008a9798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1308d3d0?, 0xc000144060?}, 0x10385016?, 0xc001b2ad80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0008a97d0?, 0xff799a4?, 0xc0005b2e40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3278
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 3258 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001300710, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x12b51860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00147c0c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001300740)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001d3f060, {0x13069620, 0xc002032b40}, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001d3f060, 0x3b9aca00, 0x0, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3278
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3386 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001c1f0e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3375
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2978 [chan receive, 13 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001e6e3c0, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2960
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2731 [chan receive]:
testing.(*T).Run(0xc0006dbba0, {0x119a0afa?, 0x0?}, 0xc001b09c00)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0006dbba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0006dbba0, 0xc0009949c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2728
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3618 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1308d3d0, 0xc000144060}, 0xc001824f50, 0xc001824f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1308d3d0, 0xc000144060}, 0x50?, 0xc001824f50, 0xc001824f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1308d3d0?, 0xc000144060?}, 0x10000c0014bd6c0?, 0xff336a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001824fd0?, 0x10393565?, 0xc001626600?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3607
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 1007 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001287c20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 915
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2800 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc001288c90, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x12b51860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00204a360)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001288cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001d3e800, {0x13069620, 0xc00144e8d0}, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001d3e800, 0x3b9aca00, 0x0, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2814
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2217 [chan receive, 57 minutes]:
testing.(*T).Run(0xc00041f040, {0x1199f4aa?, 0x40817eb8f87?}, 0xc0015e0ae0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc00041f040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc00041f040, 0x1305d418)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2729 [chan receive, 9 minutes]:
testing.(*T).Run(0xc0006db860, {0x119a0afa?, 0x0?}, 0xc001d31d00)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0006db860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0006db860, 0xc0009947c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2728
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1410 [chan send, 104 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b2b980, 0xc000144a80)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 902
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 3912 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1308d3d0, 0xc000144060}, 0xc001827f50, 0xc001827f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1308d3d0, 0xc000144060}, 0x0?, 0xc001827f50, 0xc001827f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1308d3d0?, 0xc000144060?}, 0x10385016?, 0xc00164a480?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001827fd0?, 0xff799a4?, 0xc001583e00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3893
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 3606 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001b537a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3605
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2977 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0017092c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2960
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 1008 [chan receive, 106 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001300640, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 915
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 3843 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0008ed610, 0x1)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x12b51860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001c1e660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008ed640)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001b0aaf0, {0x13069620, 0xc001d37590}, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001b0aaf0, 0x3b9aca00, 0x0, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3860
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3278 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001300740, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3273
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2976 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc001e6e390, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x12b51860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001709140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001e6e3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009fe250, {0x13069620, 0xc001c98270}, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009fe250, 0x3b9aca00, 0x0, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2978
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 1457 [select, 104 minutes]:
net/http.(*persistConn).readLoop(0xc001cc7680)
	/usr/local/go/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1450
	/usr/local/go/src/net/http/transport.go:1799 +0x152f

goroutine 4033 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001300d50, 0x0)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x12b51860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001c1e300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001300d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0014b6b20, {0x13069620, 0xc002089da0}, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0014b6b20, 0x3b9aca00, 0x0, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4022
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2275 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00159aba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2249
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2276 [chan receive, 57 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001300440, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2249
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2734 [chan receive, 17 minutes]:
testing.(*testContext).waitParallel(0xc00080c550)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00041fa00)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00041fa00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00041fa00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00041fa00, 0xc0009952c0)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2728
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2309 [chan receive, 7 minutes]:
testing.(*testContext).waitParallel(0xc00080c550)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1665 +0x5e9
testing.tRunner(0xc001b641a0, 0xc0015e0ae0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2217
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 3617 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc000697b90, 0x10)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x12b51860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001b53680)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000697bc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000802bb0, {0x13069620, 0xc001b04480}, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000802bb0, 0x3b9aca00, 0x0, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3607
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2918 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001300f80, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2916
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2911 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1308d3d0, 0xc000144060}, 0xc000112f50, 0xc000112f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1308d3d0, 0xc000144060}, 0x0?, 0xc000112f50, 0xc000112f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1308d3d0?, 0xc000144060?}, 0xc001b656c0?, 0xff336a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000112fd0?, 0xff799a4?, 0xc001582f00?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2918
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2285 [chan receive, 17 minutes]:
testing.(*T).Run(0xc00041f860, {0x1199f4aa?, 0xff32d73?}, 0x1305d5c0)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc00041f860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc00041f860, 0x1305d460)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1018 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1308d3d0, 0xc000144060}, 0xc000112f50, 0xc00089bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1308d3d0, 0xc000144060}, 0x78?, 0xc000112f50, 0xc000112f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1308d3d0?, 0xc000144060?}, 0xc0014bc680?, 0xff336a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000112fd0?, 0xff799a4?, 0xc001ac60f0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1008
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 1017 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001300610, 0x2b)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x12b51860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001287b00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001300640)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000a7f200, {0x13069620, 0xc00142afc0}, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000a7f200, 0x3b9aca00, 0x0, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 1008
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2813 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00204a480)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2809
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3505 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3488
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2291 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001300410, 0x1e)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x12b51860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00159a960)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001300440)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00162e250, {0x13069620, 0xc00170a2d0}, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00162e250, 0x3b9aca00, 0x0, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2276
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 2733 [chan receive, 17 minutes]:
testing.(*testContext).waitParallel(0xc00080c550)
	/usr/local/go/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc00041e9c0)
	/usr/local/go/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc00041e9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00041e9c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc00041e9c0, 0xc000994b00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2728
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 2818 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2817
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3607 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000697bc0, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3605
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 3397 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3396
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 2994 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2993
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 1358 [chan send, 104 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b2ad80, 0xc0016547e0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1357
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 3387 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001300780, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3375
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2814 [chan receive, 15 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001288cc0, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2809
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 2293 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2292
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 1019 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1018
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3737 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1308d3d0, 0xc000144060}, 0xc0008aa750, 0xc0008aa798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1308d3d0, 0xc000144060}, 0x0?, 0xc0008aa750, 0xc0008aa798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1308d3d0?, 0xc000144060?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0008aa7d0?, 0x103fbce5?, 0xc0017092c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3748
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2917 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001808300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2916
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3395 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001300690, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x12b51860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001c1efc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001300780)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00088ea70, {0x13069620, 0xc001ad6510}, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00088ea70, 0x3b9aca00, 0x0, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3387
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3845 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3844
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

goroutine 3498 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00204a1e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3494
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2728 [chan receive, 17 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0006db520, 0x1305d5c0)
	/usr/local/go/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2285
	/usr/local/go/src/testing/testing.go:1742 +0x390

goroutine 1151 [chan send, 106 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b2b080, 0xc001a1fec0)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1150
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 1320 [chan send, 104 minutes]:
os/exec.(*Cmd).watchCtx(0xc001366c00, 0xc001d98d20)
	/usr/local/go/src/os/exec/exec.go:793 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1319
	/usr/local/go/src/os/exec/exec.go:754 +0x976

goroutine 2817 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1308d3d0, 0xc000144060}, 0xc001b43f50, 0xc001b43f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1308d3d0, 0xc000144060}, 0x40?, 0xc001b43f50, 0xc001b43f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1308d3d0?, 0xc000144060?}, 0xc001b65860?, 0xff336a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001b43fd0?, 0xff799a4?, 0xc001582540?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2814
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 2292 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1308d3d0, 0xc000144060}, 0xc001b41750, 0xc0012e4f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1308d3d0, 0xc000144060}, 0x20?, 0xc001b41750, 0xc001b41798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1308d3d0?, 0xc000144060?}, 0xc00041eea0?, 0xff336a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x1308d1a0?, 0xc00068e6e0?, 0xc001d34001?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2276
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 3487 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001d34590, 0x11)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x12b51860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001611ec0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d345c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000677cb0, {0x13069620, 0xc00144fa40}, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000677cb0, 0x3b9aca00, 0x0, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3499
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3948 [IO wait]:
internal/poll.runtime_pollWait(0x5bc99198, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001b53200?, 0xc001894d19?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001b53200, {0xc001894d19, 0x1b2e7, 0x1b2e7})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00009ad50, {0xc001894d19?, 0xc001b7d500?, 0x1fe25?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001d36ea0, {0x13068038, 0xc001970280})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x13068178, 0xc001d36ea0}, {0x13068038, 0xc001970280}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001326678?, {0x13068178, 0xc001d36ea0})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x143b1540?, {0x13068178?, 0xc001d36ea0?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x13068178, 0xc001d36ea0}, {0x130680f8, 0xc00009ad50}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0016555c0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3946
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

goroutine 3178 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1308d3d0, 0xc000144060}, 0xc000117f50, 0xc000117f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1308d3d0, 0xc000144060}, 0xd0?, 0xc000117f50, 0xc000117f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1308d3d0?, 0xc000144060?}, 0x10385016?, 0xc001b50d80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000117fd0?, 0xff799a4?, 0xc000a18210?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3160
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 3499 [chan receive, 11 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d345c0, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3494
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 3488 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1308d3d0, 0xc000144060}, 0xc001825f50, 0xc001825f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1308d3d0, 0xc000144060}, 0xc0?, 0xc001825f50, 0xc001825f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1308d3d0?, 0xc000144060?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001825fd0?, 0xff799a4?, 0xc0015826c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3499
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 3396 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1308d3d0, 0xc000144060}, 0xc000112750, 0xc000112798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1308d3d0, 0xc000144060}, 0xc0?, 0xc000112750, 0xc000112798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1308d3d0?, 0xc000144060?}, 0xc0014bc4e0?, 0xff336a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xff79945?, 0xc000003500?, 0xc0005b2fc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3387
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

goroutine 3159 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0018092c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3173
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3160 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00074e540, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3173
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

goroutine 3177 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00074e510, 0x12)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x12b51860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001809020)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00074e540)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0014b6870, {0x13069620, 0xc001bf8720}, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0014b6870, 0x3b9aca00, 0x0, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3160
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

goroutine 3179 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3178
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3893 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001580300, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3907
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3260 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3259
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3277 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00147c1e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3273
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3949 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc00164b680, 0xc001655680)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 3946
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                                
goroutine 4034 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1308d3d0, 0xc000144060}, 0xc000115f50, 0xc000115f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1308d3d0, 0xc000144060}, 0x38?, 0xc000115f50, 0xc000115f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1308d3d0?, 0xc000144060?}, 0xc0014bda00?, 0xff336a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc000115fd0?, 0xff799a4?, 0xc001b09c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 4022
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a
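
// Goroutine 4034 above (like 3844 further down) is blocked in the
// deprecated wait.PollImmediateUntil helper: run the condition once
// immediately, then on each interval tick, until it reports done or the
// stop channel closes. A minimal sketch, assuming apimachinery v0.30.x;
// the condition here is a placeholder.
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	stopCh := make(chan struct{})
	deadline := time.Now().Add(1 * time.Second)

	err := wait.PollImmediateUntil(200*time.Millisecond, func() (bool, error) {
		return time.Now().After(deadline), nil // pretend the watched state converged
	}, stopCh)
	fmt.Println("poll finished:", err)
}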

                                                
                                                
goroutine 4012 [chan receive]:
testing.(*T).Run(0xc0014bd860, {0x1199e6d6?, 0x60400000004?}, 0xc001d30380)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc0014bd860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc0014bd860, 0xc001b09c00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2731
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3736 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001d34550, 0x10)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x12b51860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00204a300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d34600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0006bb430, {0x13069620, 0xc00170b440}, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0006bb430, 0x3b9aca00, 0x0, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3748
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3748 [chan receive, 9 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d34600, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3732
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3619 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3618
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3738 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3737
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3747 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00204a5a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3732
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3722 [chan receive, 5 minutes]:
testing.(*T).Run(0xc001b65ba0, {0x119ac61d?, 0x60400000004?}, 0xc001bbfa80)
	/usr/local/go/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc001b65ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc001b65ba0, 0xc001d31d00)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2729
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 4022 [chan receive]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001300d80, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3997
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 4035 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4034
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 4039 [IO wait]:
internal/poll.runtime_pollWait(0x5bc990a0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc000699280?, 0xc001566000?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc000699280, {0xc001566000, 0x2000, 0x2000})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
net.(*netFD).Read(0xc000699280, {0xc001566000?, 0xc000a04f00?, 0x2?})
	/usr/local/go/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0019706d0, {0xc001566000?, 0xc001566005?, 0x1a?})
	/usr/local/go/src/net/net.go:185 +0x45
crypto/tls.(*atLeastReader).Read(0xc0015e0f18, {0xc001566000?, 0x0?, 0xc0015e0f18?})
	/usr/local/go/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc0008b49b0, {0x13069d60, 0xc0015e0f18})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc0008b4708, {0x5b7f2960, 0xc0014ff818}, 0xc001495980?)
	/usr/local/go/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc0008b4708, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc0008b4708, {0xc0014ab000, 0x1000, 0xc00182dc00?})
	/usr/local/go/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc001610240, {0xc000809380, 0x9, 0x143aca30?})
	/usr/local/go/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x13068218, 0xc001610240}, {0xc000809380, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:335 +0x90
io.ReadFull(...)
	/usr/local/go/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc000809380, 0x9, 0xc001495dc0?}, {0x13068218?, 0xc001610240?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc000809340)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/frame.go:501 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc001495fa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2354 +0xda
golang.org/x/net/http2.(*ClientConn).readLoop(0xc001b2a780)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:2250 +0x8b
created by golang.org/x/net/http2.(*Transport).newClientConn in goroutine 4038
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.27.0/http2/transport.go:865 +0xcfb

                                                
                                                
goroutine 4057 [syscall]:
syscall.syscall6(0xc00144ff80?, 0x1000000000010?, 0x10100000019?, 0x5b801a50?, 0x90?, 0x14d345b8?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0018dbb28?, 0xfe000c5?, 0x90?, 0x12fc9980?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xfe0ced6?, 0xc0018dbb5c, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc00170dc50)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0016aec00)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc0016aec00)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc001b65860, 0xc0016aec00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateStop({0x1308d210?, 0xc0006dc2a0?}, 0xc001b65860, {0xc001c9cfa0?, 0x312ee1a0?}, {0x312ee1a001329758?, 0xc001329760?}, {0xff32d73?, 0xfe8adcf?}, {0xc001343400, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:228 +0x17b
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001b65860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001b65860, 0xc001d30380)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 4012
	/usr/local/go/src/testing/testing.go:1742 +0x390
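
// Goroutine 4057 above is the integration helper Run blocking in
// syscall.wait4 while a minikube child process runs; goroutines 4058-4060
// below are the stdout/stderr copiers and the watchCtx monitor that
// os/exec starts for it. A minimal sketch of that shape; the arguments,
// profile name, and timeout are illustrative, not the test's real values.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()

	// CommandContext makes Start spawn the watchCtx goroutine seen in the
	// dump, which kills the child if ctx is cancelled first.
	cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64", "stop", "-p", "example-profile")
	out, err := cmd.CombinedOutput() // Wait blocks in wait4 here until the child exits
	fmt.Printf("err=%v\n%s", err, out)
}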

                                                
                                                
goroutine 3860 [chan receive, 7 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008ed640, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3823
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3844 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x1308d3d0, 0xc000144060}, 0xc00182af50, 0xc00182af98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x1308d3d0, 0xc000144060}, 0xe0?, 0xc00182af50, 0xc00182af98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x1308d3d0?, 0xc000144060?}, 0xc0014bcb60?, 0xff336a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xff79945?, 0xc001d27c80?, 0xc0015827e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3860
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3859 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001c1e780)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3823
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3911 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0015802d0, 0x1)
	/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x12b51860?)
	/usr/local/go/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0019f88a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001580300)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00162e300, {0x13069620, 0xc001ad63c0}, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00162e300, 0x3b9aca00, 0x0, 0x1, 0xc000144060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3893
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3892 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0019f89c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3907
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3913 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3912
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.3/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3947 [IO wait, 3 minutes]:
internal/poll.runtime_pollWait(0x5bdc21b0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc001b53140?, 0xc0014064d5?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc001b53140, {0xc0014064d5, 0x32b, 0x32b})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00009ad38, {0xc0014064d5?, 0xff77b3a?, 0x263?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc001d36e70, {0x13068038, 0xc001970278})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x13068178, 0xc001d36e70}, {0x13068038, 0xc001970278}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x14323a20?, {0x13068178, 0xc001d36e70})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x143b1540?, {0x13068178?, 0xc001d36e70?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x13068178, 0xc001d36e70}, {0x130680f8, 0xc00009ad38}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001bbfa80?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 3946
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 4021 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001c1e420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3997
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.3/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3946 [syscall, 5 minutes]:
syscall.syscall6(0xc001d37f80?, 0x1000000000010?, 0x10000000019?, 0x5bbb8ca8?, 0x90?, 0x14d34108?, 0x90?)
	/usr/local/go/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc0016c6b48?, 0xfe000c5?, 0x90?, 0x12fc9980?)
	/usr/local/go/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0xff309e5?, 0xc0016c6b7c, 0x0?, 0x0?)
	/usr/local/go/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc001ab4f60)
	/usr/local/go/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc00164b680)
	/usr/local/go/src/os/exec/exec.go:901 +0x45
os/exec.(*Cmd).Run(0xc00164b680)
	/usr/local/go/src/os/exec/exec.go:608 +0x2d
k8s.io/minikube/test/integration.Run(0xc001355040, 0xc00164b680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateSecondStart({0x1308d210, 0xc0002bf810}, 0xc001355040, {0xc00098d2d8, 0x16}, {0x1e96fa7801326f58?, 0xc001326f60?}, {0xff32d73?, 0xfe8adcf?}, {0xc001856480, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:256 +0xe5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc001355040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc001355040, 0xc001bbfa80)
	/usr/local/go/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3722
	/usr/local/go/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 4058 [IO wait]:
internal/poll.runtime_pollWait(0x5bc99578, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00159a600?, 0xc0015de434?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00159a600, {0xc0015de434, 0x3cc, 0x3cc})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00009b350, {0xc0015de434?, 0x10?, 0x34?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00144e000, {0x13068038, 0xc001970760})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x13068178, 0xc00144e000}, {0x13068038, 0xc001970760}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001b65860?, {0x13068178, 0xc00144e000})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x143b1540?, {0x13068178?, 0xc00144e000?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x13068178, 0xc00144e000}, {0x130680f8, 0xc00009b350}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc001d30380?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4057
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 4059 [IO wait]:
internal/poll.runtime_pollWait(0x5bc98cc0, 0x72)
	/usr/local/go/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00159a6c0?, 0xc0015ef025?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc00159a6c0, {0xc0015ef025, 0xfdb, 0xfdb})
	/usr/local/go/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00009b368, {0xc0015ef025?, 0xc00021bdc0?, 0xe2b?})
	/usr/local/go/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00144e030, {0x13068038, 0xc001970768})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x13068178, 0xc00144e030}, {0x13068038, 0xc001970768}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc001b40678?, {0x13068178, 0xc00144e030})
	/usr/local/go/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x143b1540?, {0x13068178?, 0xc00144e030?})
	/usr/local/go/src/os/file.go:247 +0x49
io.copyBuffer({0x13068178, 0xc00144e030}, {0x130680f8, 0xc00009b368}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:578 +0x34
os/exec.(*Cmd).Start.func2(0xc0016554a0?)
	/usr/local/go/src/os/exec/exec.go:728 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4057
	/usr/local/go/src/os/exec/exec.go:727 +0x9ae

                                                
                                                
goroutine 4060 [select]:
os/exec.(*Cmd).watchCtx(0xc0016aec00, 0xc001d07560)
	/usr/local/go/src/os/exec/exec.go:768 +0xb5
created by os/exec.(*Cmd).Start in goroutine 4057
	/usr/local/go/src/os/exec/exec.go:754 +0x976

                                                
                                    

Test pass (181/227)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 24.99
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.29
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.2
12 TestDownloadOnly/v1.30.3/json-events 11.55
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.29
18 TestDownloadOnly/v1.30.3/DeleteAll 0.23
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.21
21 TestDownloadOnly/v1.31.0-rc.0/json-events 16.45
22 TestDownloadOnly/v1.31.0-rc.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-rc.0/kubectl 0
26 TestDownloadOnly/v1.31.0-rc.0/LogsDuration 0.29
27 TestDownloadOnly/v1.31.0-rc.0/DeleteAll 0.23
28 TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds 0.21
30 TestBinaryMirror 1.01
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.17
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
36 TestAddons/Setup 233.19
38 TestAddons/serial/Volcano 43.24
40 TestAddons/serial/GCPAuth/Namespaces 0.1
42 TestAddons/parallel/Registry 14.12
43 TestAddons/parallel/Ingress 19.72
44 TestAddons/parallel/InspektorGadget 10.5
45 TestAddons/parallel/MetricsServer 5.47
46 TestAddons/parallel/HelmTiller 10.16
48 TestAddons/parallel/CSI 60.73
49 TestAddons/parallel/Headlamp 18.39
50 TestAddons/parallel/CloudSpanner 5.35
51 TestAddons/parallel/LocalPath 52.36
52 TestAddons/parallel/NvidiaDevicePlugin 5.34
53 TestAddons/parallel/Yakd 10.46
54 TestAddons/StoppedEnableDisable 5.9
62 TestHyperKitDriverInstallOrUpdate 8.32
65 TestErrorSpam/setup 36.18
66 TestErrorSpam/start 1.51
67 TestErrorSpam/status 0.49
68 TestErrorSpam/pause 1.34
69 TestErrorSpam/unpause 1.31
70 TestErrorSpam/stop 152.81
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 91.76
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 38.01
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.05
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.26
82 TestFunctional/serial/CacheCmd/cache/add_local 1.36
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
84 TestFunctional/serial/CacheCmd/cache/list 0.08
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.02
87 TestFunctional/serial/CacheCmd/cache/delete 0.16
88 TestFunctional/serial/MinikubeKubectlCmd 1.18
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.51
90 TestFunctional/serial/ExtraConfig 41.52
91 TestFunctional/serial/ComponentHealth 0.05
92 TestFunctional/serial/LogsCmd 2.66
93 TestFunctional/serial/LogsFileCmd 2.66
94 TestFunctional/serial/InvalidService 5.39
96 TestFunctional/parallel/ConfigCmd 0.5
97 TestFunctional/parallel/DashboardCmd 13.29
98 TestFunctional/parallel/DryRun 1.05
99 TestFunctional/parallel/InternationalLanguage 0.84
100 TestFunctional/parallel/StatusCmd 0.59
104 TestFunctional/parallel/ServiceCmdConnect 8.55
105 TestFunctional/parallel/AddonsCmd 0.22
106 TestFunctional/parallel/PersistentVolumeClaim 26.17
108 TestFunctional/parallel/SSHCmd 0.28
109 TestFunctional/parallel/CpCmd 1.06
110 TestFunctional/parallel/MySQL 25.6
111 TestFunctional/parallel/FileSync 0.23
112 TestFunctional/parallel/CertSync 1.1
116 TestFunctional/parallel/NodeLabels 0.05
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.16
120 TestFunctional/parallel/License 0.59
121 TestFunctional/parallel/Version/short 0.12
122 TestFunctional/parallel/Version/components 0.43
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.15
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.16
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.16
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.15
127 TestFunctional/parallel/ImageCommands/ImageBuild 2.57
128 TestFunctional/parallel/ImageCommands/Setup 1.84
129 TestFunctional/parallel/DockerEnv/bash 0.62
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.01
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.64
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.46
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.41
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.7
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
140 TestFunctional/parallel/ServiceCmd/DeployApp 22.13
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.37
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.23
146 TestFunctional/parallel/ServiceCmd/List 0.41
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.37
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.25
149 TestFunctional/parallel/ServiceCmd/Format 0.25
150 TestFunctional/parallel/ServiceCmd/URL 0.24
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
155 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.14
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.26
158 TestFunctional/parallel/ProfileCmd/profile_list 0.26
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.27
160 TestFunctional/parallel/MountCmd/any-port 6.09
161 TestFunctional/parallel/MountCmd/specific-port 1.76
162 TestFunctional/parallel/MountCmd/VerifyCleanup 2.24
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.02
169 TestMultiControlPlane/serial/StartCluster 216.38
170 TestMultiControlPlane/serial/DeployApp 6.23
171 TestMultiControlPlane/serial/PingHostFromPods 1.34
172 TestMultiControlPlane/serial/AddWorkerNode 49.66
173 TestMultiControlPlane/serial/NodeLabels 0.05
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.31
175 TestMultiControlPlane/serial/CopyFile 8.87
176 TestMultiControlPlane/serial/StopSecondaryNode 8.69
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.26
178 TestMultiControlPlane/serial/RestartSecondaryNode 42.34
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.34
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.26
183 TestMultiControlPlane/serial/StopCluster 24.94
190 TestImageBuild/serial/Setup 38.32
191 TestImageBuild/serial/NormalBuild 1.58
192 TestImageBuild/serial/BuildWithBuildArg 0.72
193 TestImageBuild/serial/BuildWithDockerIgnore 0.52
194 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.54
198 TestJSONOutput/start/Command 89.92
199 TestJSONOutput/start/Audit 0
201 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/pause/Command 0.48
205 TestJSONOutput/pause/Audit 0
207 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/unpause/Command 0.47
211 TestJSONOutput/unpause/Audit 0
213 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
216 TestJSONOutput/stop/Command 8.33
217 TestJSONOutput/stop/Audit 0
219 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
220 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
221 TestErrorJSONOutput 0.57
226 TestMainNoArgs 0.08
227 TestMinikubeProfile 87.03
237 TestMultiNode/serial/MultiNodeLabels 0.05
238 TestMultiNode/serial/ProfileList 0.17
244 TestMultiNode/serial/StopMultiNode 16.77
246 TestMultiNode/serial/ValidateNameConflict 44.54
250 TestPreload 136.54
253 TestSkaffold 111.38
256 TestRunningBinaryUpgrade 101.8
258 TestKubernetesUpgrade 1364.74
271 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.49
272 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 7.07
273 TestStoppedBinaryUpgrade/Setup 1.51
274 TestStoppedBinaryUpgrade/Upgrade 142.01
277 TestStoppedBinaryUpgrade/MinikubeLogs 2.58
286 TestNoKubernetes/serial/StartNoK8sWithVersion 0.47
287 TestNoKubernetes/serial/StartWithK8s 69.73
289 TestNoKubernetes/serial/StartWithStopK8s 17.94
290 TestNoKubernetes/serial/Start 21.9
291 TestNoKubernetes/serial/VerifyK8sNotRunning 0.12
292 TestNoKubernetes/serial/ProfileList 0.47
293 TestNoKubernetes/serial/Stop 2.4
294 TestNoKubernetes/serial/StartNoArgs 19.63
295 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.13
TestDownloadOnly/v1.20.0/json-events (24.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-715000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-715000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (24.988652779s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (24.99s)
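
The preload tarballs these download-only runs fetch are pinned to an md5 digest (the ?checksum=md5:... suffix on the download URLs in the LogsDuration output below). As a rough sketch of that kind of verification, and not minikube's actual download code, the path here is a placeholder while the digest is the one from the v1.20.0 preload URL below:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 hashes the file at path and compares it to the expected digest.
func verifyMD5(path, want string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	fmt.Println(verifyMD5("preloaded-images.tar.lz4", "9a82241e9b8b4ad2b5cca73108f2c7a3"))
}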

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-715000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-715000: exit status 85 (286.132829ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-715000 | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT |          |
	|         | -p download-only-715000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 15:47:00
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 15:47:00.326998    1680 out.go:291] Setting OutFile to fd 1 ...
	I0805 15:47:00.327191    1680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 15:47:00.327197    1680 out.go:304] Setting ErrFile to fd 2...
	I0805 15:47:00.327200    1680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 15:47:00.327377    1680 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	W0805 15:47:00.327471    1680 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19373-1122/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19373-1122/.minikube/config/config.json: no such file or directory
	I0805 15:47:00.329978    1680 out.go:298] Setting JSON to true
	I0805 15:47:00.353572    1680 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":991,"bootTime":1722897029,"procs":432,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 15:47:00.353668    1680 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 15:47:00.376470    1680 out.go:97] [download-only-715000] minikube v1.33.1 on Darwin 14.5
	I0805 15:47:00.376724    1680 notify.go:220] Checking for updates...
	W0805 15:47:00.376753    1680 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball: no such file or directory
	I0805 15:47:00.398061    1680 out.go:169] MINIKUBE_LOCATION=19373
	I0805 15:47:00.421049    1680 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 15:47:00.451187    1680 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 15:47:00.472077    1680 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 15:47:00.493307    1680 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	W0805 15:47:00.554242    1680 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 15:47:00.554723    1680 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 15:47:00.606076    1680 out.go:97] Using the hyperkit driver based on user configuration
	I0805 15:47:00.606133    1680 start.go:297] selected driver: hyperkit
	I0805 15:47:00.606145    1680 start.go:901] validating driver "hyperkit" against <nil>
	I0805 15:47:00.606365    1680 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 15:47:00.606742    1680 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 15:47:00.798069    1680 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 15:47:00.802937    1680 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 15:47:00.802958    1680 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 15:47:00.802987    1680 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 15:47:00.807063    1680 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0805 15:47:00.807255    1680 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 15:47:00.807309    1680 cni.go:84] Creating CNI manager for ""
	I0805 15:47:00.807325    1680 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0805 15:47:00.807400    1680 start.go:340] cluster config:
	{Name:download-only-715000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-715000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 15:47:00.807656    1680 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 15:47:00.828849    1680 out.go:97] Downloading VM boot image ...
	I0805 15:47:00.828984    1680 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/iso/amd64/minikube-v1.33.1-1722248113-19339-amd64.iso
	I0805 15:47:10.280281    1680 out.go:97] Starting "download-only-715000" primary control-plane node in "download-only-715000" cluster
	I0805 15:47:10.280339    1680 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 15:47:10.334264    1680 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0805 15:47:10.334324    1680 cache.go:56] Caching tarball of preloaded images
	I0805 15:47:10.349280    1680 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 15:47:10.370973    1680 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0805 15:47:10.371005    1680 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0805 15:47:10.453085    1680 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0805 15:47:18.830314    1680 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0805 15:47:18.830524    1680 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0805 15:47:19.375521    1680 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0805 15:47:19.375787    1680 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/download-only-715000/config.json ...
	I0805 15:47:19.375810    1680 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/download-only-715000/config.json: {Name:mk620756ffc7f33bf5748d17e12230a5c92f03b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 15:47:19.376132    1680 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0805 15:47:19.376495    1680 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-715000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-715000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.29s)
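
This check passes even though `minikube logs` exits non-zero: the profile was created with --download-only and never started, so, as the stdout above says, the control-plane host does not exist and the command fails with status 85. A minimal sketch of reading a child process's exit status the way the harness reports it; the command line is the one from this test, but the inspection logic is illustrative, not the test's own:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "logs", "-p", "download-only-715000")
	err := cmd.Run()

	// On a non-zero exit, Run returns an *exec.ExitError carrying the status.
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("exit code:", exitErr.ExitCode()) // 85 in the run above
	}
}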

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-715000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.20s)

                                                
                                    
TestDownloadOnly/v1.30.3/json-events (11.55s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-656000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-656000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperkit : (11.546183826s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (11.55s)

                                                
                                    
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-656000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-656000: exit status 85 (289.922903ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-715000 | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT |                     |
	|         | -p download-only-715000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT | 05 Aug 24 15:47 PDT |
	| delete  | -p download-only-715000        | download-only-715000 | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT | 05 Aug 24 15:47 PDT |
	| start   | -o=json --download-only        | download-only-656000 | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT |                     |
	|         | -p download-only-656000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 15:47:26
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 15:47:26.036256    1713 out.go:291] Setting OutFile to fd 1 ...
	I0805 15:47:26.036506    1713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 15:47:26.036511    1713 out.go:304] Setting ErrFile to fd 2...
	I0805 15:47:26.036515    1713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 15:47:26.036697    1713 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 15:47:26.038229    1713 out.go:298] Setting JSON to true
	I0805 15:47:26.060448    1713 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1017,"bootTime":1722897029,"procs":434,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 15:47:26.060535    1713 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 15:47:26.082497    1713 out.go:97] [download-only-656000] minikube v1.33.1 on Darwin 14.5
	I0805 15:47:26.082706    1713 notify.go:220] Checking for updates...
	I0805 15:47:26.104082    1713 out.go:169] MINIKUBE_LOCATION=19373
	I0805 15:47:26.125224    1713 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 15:47:26.148130    1713 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 15:47:26.169276    1713 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 15:47:26.190244    1713 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	W0805 15:47:26.231871    1713 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 15:47:26.232412    1713 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 15:47:26.262266    1713 out.go:97] Using the hyperkit driver based on user configuration
	I0805 15:47:26.262314    1713 start.go:297] selected driver: hyperkit
	I0805 15:47:26.262325    1713 start.go:901] validating driver "hyperkit" against <nil>
	I0805 15:47:26.262527    1713 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 15:47:26.262849    1713 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 15:47:26.272525    1713 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 15:47:26.276247    1713 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 15:47:26.276267    1713 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 15:47:26.276297    1713 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 15:47:26.278895    1713 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0805 15:47:26.279072    1713 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 15:47:26.279098    1713 cni.go:84] Creating CNI manager for ""
	I0805 15:47:26.279117    1713 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 15:47:26.279125    1713 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 15:47:26.279197    1713 start.go:340] cluster config:
	{Name:download-only-656000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-656000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 15:47:26.279284    1713 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 15:47:26.300229    1713 out.go:97] Starting "download-only-656000" primary control-plane node in "download-only-656000" cluster
	I0805 15:47:26.300267    1713 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 15:47:26.357617    1713 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 15:47:26.357649    1713 cache.go:56] Caching tarball of preloaded images
	I0805 15:47:26.358121    1713 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 15:47:26.379427    1713 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0805 15:47:26.379444    1713 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0805 15:47:26.461285    1713 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4?checksum=md5:6304692df2fe6f7b0bdd7f93d160be8c -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0805 15:47:31.771151    1713 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0805 15:47:31.771338    1713 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0805 15:47:32.254390    1713 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0805 15:47:32.254625    1713 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/download-only-656000/config.json ...
	I0805 15:47:32.254648    1713 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/download-only-656000/config.json: {Name:mk4b2c564505d66304cdb12931a83b955e9d0d9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 15:47:32.254982    1713 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0805 15:47:32.255250    1713 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/darwin/amd64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-656000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-656000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.29s)
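The preload download above carries its expected digest in the "?checksum=md5:..." query parameter, and minikube verifies it before caching the tarball. If a cached preload is ever suspect, the same check can be reproduced by hand; a minimal sketch using the URL and digest from this log (md5 -q is the macOS counterpart of md5sum):

	# fetch the same preload tarball the test downloaded
	curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	# print its digest; it should match the checksum query parameter above
	md5 -q preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4   # expected: 6304692df2fe6f7b0bdd7f93d160be8c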

TestDownloadOnly/v1.30.3/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.23s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-656000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.0-rc.0/json-events (16.45s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-516000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-516000 --force --alsologtostderr --kubernetes-version=v1.31.0-rc.0 --container-runtime=docker --driver=hyperkit : (16.450835196s)
--- PASS: TestDownloadOnly/v1.31.0-rc.0/json-events (16.45s)

TestDownloadOnly/v1.31.0-rc.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-rc.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-rc.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-rc.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-516000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-516000: exit status 85 (292.248638ms)

-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-715000 | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT |                     |
	|         | -p download-only-715000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=hyperkit                 |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT | 05 Aug 24 15:47 PDT |
	| delete  | -p download-only-715000           | download-only-715000 | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT | 05 Aug 24 15:47 PDT |
	| start   | -o=json --download-only           | download-only-656000 | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT |                     |
	|         | -p download-only-656000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=hyperkit                 |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT | 05 Aug 24 15:47 PDT |
	| delete  | -p download-only-656000           | download-only-656000 | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT | 05 Aug 24 15:47 PDT |
	| start   | -o=json --download-only           | download-only-516000 | jenkins | v1.33.1 | 05 Aug 24 15:47 PDT |                     |
	|         | -p download-only-516000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-rc.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=hyperkit                 |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/05 15:47:38
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0805 15:47:38.310414    1741 out.go:291] Setting OutFile to fd 1 ...
	I0805 15:47:38.311022    1741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 15:47:38.311031    1741 out.go:304] Setting ErrFile to fd 2...
	I0805 15:47:38.311037    1741 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 15:47:38.311577    1741 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 15:47:38.313086    1741 out.go:298] Setting JSON to true
	I0805 15:47:38.335770    1741 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1029,"bootTime":1722897029,"procs":423,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 15:47:38.335879    1741 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 15:47:38.357273    1741 out.go:97] [download-only-516000] minikube v1.33.1 on Darwin 14.5
	I0805 15:47:38.357497    1741 notify.go:220] Checking for updates...
	I0805 15:47:38.379022    1741 out.go:169] MINIKUBE_LOCATION=19373
	I0805 15:47:38.400170    1741 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 15:47:38.420849    1741 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 15:47:38.441937    1741 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 15:47:38.463228    1741 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	W0805 15:47:38.504990    1741 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0805 15:47:38.505510    1741 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 15:47:38.535234    1741 out.go:97] Using the hyperkit driver based on user configuration
	I0805 15:47:38.535311    1741 start.go:297] selected driver: hyperkit
	I0805 15:47:38.535358    1741 start.go:901] validating driver "hyperkit" against <nil>
	I0805 15:47:38.535590    1741 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 15:47:38.535814    1741 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19373-1122/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0805 15:47:38.545614    1741 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0805 15:47:38.549539    1741 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 15:47:38.549564    1741 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0805 15:47:38.549595    1741 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0805 15:47:38.552263    1741 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0805 15:47:38.552413    1741 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0805 15:47:38.552467    1741 cni.go:84] Creating CNI manager for ""
	I0805 15:47:38.552481    1741 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0805 15:47:38.552495    1741 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0805 15:47:38.552563    1741 start.go:340] cluster config:
	{Name:download-only-516000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-rc.0 ClusterName:download-only-516000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-rc.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 15:47:38.552654    1741 iso.go:125] acquiring lock: {Name:mk71e8d40232ece83c91dc82184f03ab93aee56e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0805 15:47:38.573913    1741 out.go:97] Starting "download-only-516000" primary control-plane node in "download-only-516000" cluster
	I0805 15:47:38.573949    1741 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 15:47:38.649230    1741 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0805 15:47:38.649298    1741 cache.go:56] Caching tarball of preloaded images
	I0805 15:47:38.649753    1741 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 15:47:38.671296    1741 out.go:97] Downloading Kubernetes v1.31.0-rc.0 preload ...
	I0805 15:47:38.671313    1741 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0805 15:47:38.750281    1741 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-rc.0/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4?checksum=md5:214beb6d5aadd59deaf940ce47a22f04 -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4
	I0805 15:47:47.656511    1741 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0805 15:47:47.656698    1741 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-rc.0-docker-overlay2-amd64.tar.lz4 ...
	I0805 15:47:48.121982    1741 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-rc.0 on docker
	I0805 15:47:48.122218    1741 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/download-only-516000/config.json ...
	I0805 15:47:48.122242    1741 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/download-only-516000/config.json: {Name:mk7cb4fa67d0c7e781fbd2cff1d56d567c5874eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0805 15:47:48.122591    1741 preload.go:131] Checking if preload exists for k8s version v1.31.0-rc.0 and runtime docker
	I0805 15:47:48.122863    1741 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-rc.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-rc.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19373-1122/.minikube/cache/darwin/amd64/v1.31.0-rc.0/kubectl
	
	
	* The control-plane node download-only-516000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-516000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-rc.0/LogsDuration (0.29s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAll (0.23s)

TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-516000
--- PASS: TestDownloadOnly/v1.31.0-rc.0/DeleteAlwaysSucceeds (0.21s)

TestBinaryMirror (1.01s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-235000 --alsologtostderr --binary-mirror http://127.0.0.1:49651 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-235000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-235000
--- PASS: TestBinaryMirror (1.01s)
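TestBinaryMirror points minikube's binary downloads at a local HTTP server via --binary-mirror (the harness stands up its own server on an ephemeral port, 49651 in this run). Reproducing that setup by hand might look like the sketch below; the port, profile name, and mirror directory are illustrative, not taken from the test:

	# serve a directory laid out like the upstream release tree (port is illustrative)
	python3 -m http.server 8000 --directory /path/to/mirror &
	# have minikube pull its binaries from the local mirror, as the test does
	out/minikube-darwin-amd64 start --download-only -p binary-mirror-demo --binary-mirror http://127.0.0.1:8000 --driver=hyperkit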

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.17s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-871000
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-871000: exit status 85 (167.022255ms)

-- stdout --
	* Profile "addons-871000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-871000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.17s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-871000
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-871000: exit status 85 (188.277101ms)

-- stdout --
	* Profile "addons-871000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-871000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)
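Both PreSetup cases above rely on minikube exiting with status 85 when an addon is toggled against a profile that does not exist. A quick manual equivalent, using commands that appear elsewhere in this report (the echo guard is illustrative):

	# confirm whether the profile exists before toggling addons
	out/minikube-darwin-amd64 profile list
	# against a missing profile this exits 85, as both tests assert
	out/minikube-darwin-amd64 addons enable dashboard -p addons-871000 || echo "exit status $?"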

TestAddons/Setup (233.19s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-871000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-871000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m53.186314688s)
--- PASS: TestAddons/Setup (233.19s)

TestAddons/serial/Volcano (43.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 11.154796ms
addons_test.go:905: volcano-admission stabilized in 11.197702ms
addons_test.go:897: volcano-scheduler stabilized in 11.336784ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-k2t9q" [eecea4cd-8e8a-4c79-a7e5-6df8921a7986] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.00262685s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-gql5d" [a5caf37a-c4b5-49d1-a6da-d0efefa22151] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.002958066s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-4dj2d" [1a6287f3-cd27-4459-b972-56a4958228fd] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00213317s
addons_test.go:932: (dbg) Run:  kubectl --context addons-871000 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-871000 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-871000 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [98e71a88-9a36-483c-bea7-b25cbcd226ec] Pending
helpers_test.go:344: "test-job-nginx-0" [98e71a88-9a36-483c-bea7-b25cbcd226ec] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [98e71a88-9a36-483c-bea7-b25cbcd226ec] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 18.003154843s
addons_test.go:968: (dbg) Run:  out/minikube-darwin-amd64 -p addons-871000 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-darwin-amd64 -p addons-871000 addons disable volcano --alsologtostderr -v=1: (9.932143908s)
--- PASS: TestAddons/serial/Volcano (43.24s)
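The Volcano check submits testdata/vcjob.yaml and then waits on pods labeled volcano.sh/job-name=test-job in the my-volcano namespace. The same wait can be followed interactively with a label-selector watch (a sketch; the label and namespace are taken from the log above):

	kubectl --context addons-871000 create -f testdata/vcjob.yaml
	kubectl --context addons-871000 get pods -n my-volcano -l volcano.sh/job-name=test-job -w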

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-871000 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-871000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/parallel/Registry (14.12s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 1.351906ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-698f998955-ffzbp" [dac852b5-8bb9-47ec-b948-e11925836ddf] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004844828s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-qb58m" [5327b51a-1b58-4e72-8817-19e444f5e17d] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004243188s
addons_test.go:342: (dbg) Run:  kubectl --context addons-871000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-871000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-871000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.444467924s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p addons-871000 ip
2024/08/05 15:53:05 [DEBUG] GET http://192.169.0.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p addons-871000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.12s)

TestAddons/parallel/Ingress (19.72s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-871000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-871000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-871000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [01763b9a-9859-4bcc-95a8-cd5aa9d8945d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [01763b9a-9859-4bcc-95a8-cd5aa9d8945d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004746292s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-871000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-871000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 -p addons-871000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p addons-871000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-darwin-amd64 -p addons-871000 addons disable ingress-dns --alsologtostderr -v=1: (1.367485336s)
addons_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p addons-871000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p addons-871000 addons disable ingress --alsologtostderr -v=1: (7.4339313s)
--- PASS: TestAddons/parallel/Ingress (19.72s)

TestAddons/parallel/InspektorGadget (10.5s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-tm8d2" [f20866d2-e387-495f-9883-0f0b1aff2308] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004474777s
addons_test.go:851: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-871000
addons_test.go:851: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-871000: (5.491728745s)
--- PASS: TestAddons/parallel/InspektorGadget (10.50s)

TestAddons/parallel/MetricsServer (5.47s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.832337ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-s2cj8" [6c6ed7bc-9557-4aae-a2af-3169d84f5cfe] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004859126s
addons_test.go:417: (dbg) Run:  kubectl --context addons-871000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-871000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.47s)

TestAddons/parallel/HelmTiller (10.16s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.847595ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-ds2tg" [3fb22008-8ae4-47bc-b4a3-a4c6021df3cc] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004166077s
addons_test.go:475: (dbg) Run:  kubectl --context addons-871000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-871000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.749724859s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-871000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.16s)

TestAddons/parallel/CSI (60.73s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.399533ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-871000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-871000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5b37fe15-20fe-4893-9d8a-5ea18ccd0ee1] Pending
helpers_test.go:344: "task-pv-pod" [5b37fe15-20fe-4893-9d8a-5ea18ccd0ee1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5b37fe15-20fe-4893-9d8a-5ea18ccd0ee1] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.004011317s
addons_test.go:590: (dbg) Run:  kubectl --context addons-871000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-871000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-871000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-871000 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-871000 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-871000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-871000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d43eff66-30ab-4b2d-96f7-638d266091d8] Pending
helpers_test.go:344: "task-pv-pod-restore" [d43eff66-30ab-4b2d-96f7-638d266091d8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d43eff66-30ab-4b2d-96f7-638d266091d8] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003981471s
addons_test.go:632: (dbg) Run:  kubectl --context addons-871000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-871000 delete pod task-pv-pod-restore: (1.287528194s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-871000 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-871000 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-871000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-darwin-amd64 -p addons-871000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.439070321s)
addons_test.go:648: (dbg) Run:  out/minikube-darwin-amd64 -p addons-871000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (60.73s)
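Condensed, the CSI case above round-trips a volume through a snapshot and a restore. The commands below are the ones the test issues (verbatim from this log); the readiness polling between steps is omitted:

	kubectl --context addons-871000 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-871000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-871000 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-871000 delete pod task-pv-pod
	kubectl --context addons-871000 delete pvc hpvc
	kubectl --context addons-871000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-871000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml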

TestAddons/parallel/Headlamp (18.39s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-871000 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-9d868696f-brknm" [94454db2-0782-40f2-8095-68143dbc3e12] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-9d868696f-brknm" [94454db2-0782-40f2-8095-68143dbc3e12] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004667869s
addons_test.go:839: (dbg) Run:  out/minikube-darwin-amd64 -p addons-871000 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-darwin-amd64 -p addons-871000 addons disable headlamp --alsologtostderr -v=1: (5.440999921s)
--- PASS: TestAddons/parallel/Headlamp (18.39s)

TestAddons/parallel/CloudSpanner (5.35s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5455fb9b69-6skdm" [536c372f-0ab1-49b9-8ea1-fa1bbba2169e] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003334978s
addons_test.go:870: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-871000
--- PASS: TestAddons/parallel/CloudSpanner (5.35s)

TestAddons/parallel/LocalPath (52.36s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-871000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-871000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-871000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9bacce7d-53a1-4ef0-a67e-271834d97e50] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9bacce7d-53a1-4ef0-a67e-271834d97e50] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9bacce7d-53a1-4ef0-a67e-271834d97e50] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005543018s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-871000 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-darwin-amd64 -p addons-871000 ssh "cat /opt/local-path-provisioner/pvc-f704f022-5e15-494c-98b7-77416d1716a0_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-871000 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-871000 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 -p addons-871000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-darwin-amd64 -p addons-871000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.734637882s)
--- PASS: TestAddons/parallel/LocalPath (52.36s)

TestAddons/parallel/NvidiaDevicePlugin (5.34s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-dqbqw" [543f2ebb-447b-4bd7-8a96-085891e40c8b] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004292243s
addons_test.go:1064: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-871000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.34s)

TestAddons/parallel/Yakd (10.46s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-czj42" [217e2e58-0bee-4739-8492-8fb55ea0e7a5] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003565083s
addons_test.go:1076: (dbg) Run:  out/minikube-darwin-amd64 -p addons-871000 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-darwin-amd64 -p addons-871000 addons disable yakd --alsologtostderr -v=1: (5.458300261s)
--- PASS: TestAddons/parallel/Yakd (10.46s)

TestAddons/StoppedEnableDisable (5.9s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-871000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-871000: (5.372599595s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-871000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-871000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-871000
--- PASS: TestAddons/StoppedEnableDisable (5.90s)

TestHyperKitDriverInstallOrUpdate (8.32s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.32s)

TestErrorSpam/setup (36.18s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-247000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-247000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-247000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-247000 --driver=hyperkit : (36.177726932s)
--- PASS: TestErrorSpam/setup (36.18s)

TestErrorSpam/start (1.51s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-247000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-247000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-247000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-247000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-247000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-247000 start --dry-run
--- PASS: TestErrorSpam/start (1.51s)

TestErrorSpam/status (0.49s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-247000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-247000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-247000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-247000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-247000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-247000 status
--- PASS: TestErrorSpam/status (0.49s)

TestErrorSpam/pause (1.34s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-247000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-247000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-247000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-247000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-247000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-247000 pause
--- PASS: TestErrorSpam/pause (1.34s)

TestErrorSpam/unpause (1.31s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-247000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-247000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-247000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-247000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-247000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-247000 unpause
--- PASS: TestErrorSpam/unpause (1.31s)

TestErrorSpam/stop (152.81s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-247000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-247000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-247000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-247000 stop: (2.382941106s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-247000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-247000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-247000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-247000 stop: (1m15.201826423s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-247000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-247000 stop
E0805 15:56:50.463856    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 15:56:50.471984    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 15:56:50.482766    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 15:56:50.504080    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 15:56:50.544495    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 15:56:50.625046    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 15:56:50.787243    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 15:56:51.107722    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 15:56:51.747973    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 15:56:53.029240    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 15:56:55.590218    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 15:57:00.710390    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 15:57:10.950906    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 15:57:31.432450    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-247000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-247000 stop: (1m15.222696815s)
--- PASS: TestErrorSpam/stop (152.81s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19373-1122/.minikube/files/etc/test/nested/copy/1678/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (91.76s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-558000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0805 15:58:12.393084    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 15:59:34.313469    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-558000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (1m31.759228805s)
--- PASS: TestFunctional/serial/StartWithProxy (91.76s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (38.01s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-558000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-558000 --alsologtostderr -v=8: (38.00688734s)
functional_test.go:659: soft start took 38.007352417s for "functional-558000" cluster.
--- PASS: TestFunctional/serial/SoftStart (38.01s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.05s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-558000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.05s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-558000 cache add registry.k8s.io/pause:3.1: (1.222043884s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-558000 cache add registry.k8s.io/pause:3.3: (1.082523584s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.26s)

TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-558000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local3482160262/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 cache add minikube-local-cache-test:functional-558000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 cache delete minikube-local-cache-test:functional-558000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-558000
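Note: the local-image cache flow above is reproducible by hand; a minimal sketch, assuming Docker is running, the functional-558000 profile exists, and a Dockerfile in the current directory stands in for the test's temp build context:

	$ docker build -t minikube-local-cache-test:functional-558000 .
	$ out/minikube-darwin-amd64 -p functional-558000 cache add minikube-local-cache-test:functional-558000
	$ out/minikube-darwin-amd64 -p functional-558000 cache delete minikube-local-cache-test:functional-558000
	$ docker rmi minikube-local-cache-test:functional-558000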
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.02s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (139.683854ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
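Note: the expected exit-1 above is the point of the test: the image is removed from the node, confirmed gone, then restored from the host-side cache. A minimal sketch of the same cycle, assuming pause:latest is already in the profile's cache:

	$ out/minikube-darwin-amd64 -p functional-558000 ssh sudo docker rmi registry.k8s.io/pause:latest
	$ out/minikube-darwin-amd64 -p functional-558000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
	$ out/minikube-darwin-amd64 -p functional-558000 cache reload
	$ out/minikube-darwin-amd64 -p functional-558000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: restored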
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.02s)

TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (1.18s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 kubectl -- --context functional-558000 get pods
functional_test.go:712: (dbg) Done: out/minikube-darwin-amd64 -p functional-558000 kubectl -- --context functional-558000 get pods: (1.181276381s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.18s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.51s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-558000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-558000 get pods: (1.509836804s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.51s)

TestFunctional/serial/ExtraConfig (41.52s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-558000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-558000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.51644714s)
functional_test.go:757: restart took 41.51661779s for "functional-558000" cluster.
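Note: --extra-config takes component.key=value pairs (here the apiserver's enable-admission-plugins flag) and restarts the existing cluster with them; a minimal sketch of the invocation exercised above:

	$ out/minikube-darwin-amd64 start -p functional-558000 \
	    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all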
--- PASS: TestFunctional/serial/ExtraConfig (41.52s)

TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-558000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

TestFunctional/serial/LogsCmd (2.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-558000 logs: (2.656621084s)
--- PASS: TestFunctional/serial/LogsCmd (2.66s)

TestFunctional/serial/LogsFileCmd (2.66s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd1252883528/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-558000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd1252883528/001/logs.txt: (2.657985385s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.66s)

TestFunctional/serial/InvalidService (5.39s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-558000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-558000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-558000: exit status 115 (276.64365ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:30289 |
	|-----------|-------------|-------------|--------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-558000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-558000 delete -f testdata/invalidsvc.yaml: (1.98603291s)
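Note: exit status 115 here corresponds to the SVC_UNREACHABLE reason in the stderr box: the Service exists and gets a NodePort URL, but no running pod backs it. A minimal way to observe the same failure, assuming the testdata manifest:

	$ kubectl --context functional-558000 apply -f testdata/invalidsvc.yaml
	$ out/minikube-darwin-amd64 service invalid-svc -p functional-558000; echo "exit: $?"   # exit: 115
	$ kubectl --context functional-558000 delete -f testdata/invalidsvc.yaml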
--- PASS: TestFunctional/serial/InvalidService (5.39s)

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 config get cpus: exit status 14 (70.446687ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 config get cpus: exit status 14 (55.248659ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
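Note: both Non-zero exits above are the assertion, not a failure: config get returns exit code 14 once the key is unset. A minimal sketch of the set/get/unset cycle:

	$ out/minikube-darwin-amd64 -p functional-558000 config set cpus 2
	$ out/minikube-darwin-amd64 -p functional-558000 config get cpus                    # prints 2
	$ out/minikube-darwin-amd64 -p functional-558000 config unset cpus
	$ out/minikube-darwin-amd64 -p functional-558000 config get cpus; echo "exit: $?"   # exit: 14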
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)

TestFunctional/parallel/DashboardCmd (13.29s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-558000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-558000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3340: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.29s)

TestFunctional/parallel/DryRun (1.05s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-558000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-558000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (569.275578ms)

-- stdout --
	* [functional-558000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0805 16:02:13.563098    3257 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:02:13.563356    3257 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:02:13.563362    3257 out.go:304] Setting ErrFile to fd 2...
	I0805 16:02:13.563365    3257 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:02:13.563523    3257 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:02:13.565011    3257 out.go:298] Setting JSON to false
	I0805 16:02:13.587357    3257 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1904,"bootTime":1722897029,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:02:13.587441    3257 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:02:13.608732    3257 out.go:177] * [functional-558000] minikube v1.33.1 on Darwin 14.5
	I0805 16:02:13.650613    3257 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:02:13.650659    3257 notify.go:220] Checking for updates...
	I0805 16:02:13.692600    3257 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:02:13.713573    3257 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:02:13.735354    3257 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:02:13.777545    3257 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:02:13.835544    3257 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:02:13.856964    3257 config.go:182] Loaded profile config "functional-558000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:02:13.857309    3257 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:02:13.857372    3257 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:02:13.866546    3257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50845
	I0805 16:02:13.866994    3257 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:02:13.867577    3257 main.go:141] libmachine: Using API Version  1
	I0805 16:02:13.867601    3257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:02:13.867944    3257 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:02:13.868131    3257 main.go:141] libmachine: (functional-558000) Calling .DriverName
	I0805 16:02:13.868350    3257 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:02:13.868641    3257 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:02:13.868669    3257 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:02:13.877534    3257 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50850
	I0805 16:02:13.877910    3257 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:02:13.878292    3257 main.go:141] libmachine: Using API Version  1
	I0805 16:02:13.878313    3257 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:02:13.878540    3257 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:02:13.878656    3257 main.go:141] libmachine: (functional-558000) Calling .DriverName
	I0805 16:02:13.907549    3257 out.go:177] * Using the hyperkit driver based on existing profile
	I0805 16:02:13.949736    3257 start.go:297] selected driver: hyperkit
	I0805 16:02:13.949764    3257 start.go:901] validating driver "hyperkit" against &{Name:functional-558000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-558000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:02:13.949966    3257 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:02:13.993624    3257 out.go:177] 
	W0805 16:02:14.014580    3257 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0805 16:02:14.035333    3257 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-558000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.05s)

TestFunctional/parallel/InternationalLanguage (0.84s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-558000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-558000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (843.867603ms)

-- stdout --
	* [functional-558000] minikube v1.33.1 sur Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0805 16:02:14.598996    3288 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:02:14.599160    3288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:02:14.599165    3288 out.go:304] Setting ErrFile to fd 2...
	I0805 16:02:14.599169    3288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:02:14.599375    3288 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:02:14.600945    3288 out.go:298] Setting JSON to false
	I0805 16:02:14.624264    3288 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1905,"bootTime":1722897029,"procs":487,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0805 16:02:14.624361    3288 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0805 16:02:14.645704    3288 out.go:177] * [functional-558000] minikube v1.33.1 sur Darwin 14.5
	I0805 16:02:14.687789    3288 notify.go:220] Checking for updates...
	I0805 16:02:14.708606    3288 out.go:177]   - MINIKUBE_LOCATION=19373
	I0805 16:02:14.729554    3288 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	I0805 16:02:14.771783    3288 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0805 16:02:14.813320    3288 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0805 16:02:14.855700    3288 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	I0805 16:02:14.898565    3288 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0805 16:02:14.919951    3288 config.go:182] Loaded profile config "functional-558000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:02:14.920350    3288 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:02:14.920394    3288 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:02:14.929373    3288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50891
	I0805 16:02:14.929773    3288 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:02:14.930221    3288 main.go:141] libmachine: Using API Version  1
	I0805 16:02:14.930249    3288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:02:14.930472    3288 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:02:14.930599    3288 main.go:141] libmachine: (functional-558000) Calling .DriverName
	I0805 16:02:14.930785    3288 driver.go:392] Setting default libvirt URI to qemu:///system
	I0805 16:02:14.931043    3288 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:02:14.931079    3288 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:02:14.939575    3288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50896
	I0805 16:02:14.939899    3288 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:02:14.940217    3288 main.go:141] libmachine: Using API Version  1
	I0805 16:02:14.940239    3288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:02:14.940473    3288 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:02:14.940602    3288 main.go:141] libmachine: (functional-558000) Calling .DriverName
	I0805 16:02:15.018370    3288 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0805 16:02:15.080513    3288 start.go:297] selected driver: hyperkit
	I0805 16:02:15.080532    3288 start.go:901] validating driver "hyperkit" against &{Name:functional-558000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19339/minikube-v1.33.1-1722248113-19339-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-558000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0805 16:02:15.080812    3288 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0805 16:02:15.206508    3288 out.go:177] 
	W0805 16:02:15.290570    3288 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0805 16:02:15.353562    3288 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.84s)

TestFunctional/parallel/StatusCmd (0.59s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 status -o json
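Note: status -f takes a Go template over the status struct (.Host, .Kubelet, .APIServer, .Kubeconfig); the "kublet" spelling above is literal label text inside the test's template, not a flag or field name. A minimal sketch:

	$ out/minikube-darwin-amd64 -p functional-558000 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
	$ out/minikube-darwin-amd64 -p functional-558000 status -o json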
--- PASS: TestFunctional/parallel/StatusCmd (0.59s)

TestFunctional/parallel/ServiceCmdConnect (8.55s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-558000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-558000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-v8pvx" [414641e9-20b4-40a1-b814-a571a78b3e61] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-v8pvx" [414641e9-20b4-40a1-b814-a571a78b3e61] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.00489459s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.169.0.4:31298
functional_test.go:1671: http://192.169.0.4:31298: success! body:

Hostname: hello-node-connect-57b4589c47-v8pvx

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:31298
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.55s)

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (26.17s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [473b6fcb-9473-40a2-aca1-8dea61252d83] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004694109s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-558000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-558000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-558000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-558000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c3b20d8d-1600-41a0-8de2-0e2ea74f0525] Pending
helpers_test.go:344: "sp-pod" [c3b20d8d-1600-41a0-8de2-0e2ea74f0525] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c3b20d8d-1600-41a0-8de2-0e2ea74f0525] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.002498084s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-558000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-558000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-558000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [0acbbb7d-2f13-458b-93ea-68da919e004e] Pending
helpers_test.go:344: "sp-pod" [0acbbb7d-2f13-458b-93ea-68da919e004e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [0acbbb7d-2f13-458b-93ea-68da919e004e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004787769s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-558000 exec sp-pod -- ls /tmp/mount
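Note: the test proves persistence by writing through the claim, deleting the pod, and re-reading from a fresh pod bound to the same PVC; a minimal sketch, assuming the testdata manifests and waiting for sp-pod to be Running between steps:

	$ kubectl --context functional-558000 apply -f testdata/storage-provisioner/pvc.yaml
	$ kubectl --context functional-558000 apply -f testdata/storage-provisioner/pod.yaml
	$ kubectl --context functional-558000 exec sp-pod -- touch /tmp/mount/foo
	$ kubectl --context functional-558000 delete -f testdata/storage-provisioner/pod.yaml
	$ kubectl --context functional-558000 apply -f testdata/storage-provisioner/pod.yaml
	$ kubectl --context functional-558000 exec sp-pod -- ls /tmp/mount   # foo survived the pod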
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.17s)

TestFunctional/parallel/SSHCmd (0.28s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.28s)

TestFunctional/parallel/CpCmd (1.06s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh -n functional-558000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 cp functional-558000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd1459448652/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh -n functional-558000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh -n functional-558000 "sudo cat /tmp/does/not/exist/cp-test.txt"
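Note: minikube cp copies host-to-node, node-to-host (via the <profile>:<path> form), and creates missing destination directories; a minimal sketch of the three cases above (the host destination path is illustrative):

	$ out/minikube-darwin-amd64 -p functional-558000 cp testdata/cp-test.txt /home/docker/cp-test.txt
	$ out/minikube-darwin-amd64 -p functional-558000 cp functional-558000:/home/docker/cp-test.txt ./cp-test.txt
	$ out/minikube-darwin-amd64 -p functional-558000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt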
--- PASS: TestFunctional/parallel/CpCmd (1.06s)

TestFunctional/parallel/MySQL (25.6s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-558000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-f7d7f" [0e70e0ad-0bb2-4738-92e7-5776a5035664] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-f7d7f" [0e70e0ad-0bb2-4738-92e7-5776a5035664] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.002506447s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-558000 exec mysql-64454c8b5c-f7d7f -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-558000 exec mysql-64454c8b5c-f7d7f -- mysql -ppassword -e "show databases;": exit status 1 (171.645516ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-558000 exec mysql-64454c8b5c-f7d7f -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-558000 exec mysql-64454c8b5c-f7d7f -- mysql -ppassword -e "show databases;": exit status 1 (119.685824ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-558000 exec mysql-64454c8b5c-f7d7f -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-558000 exec mysql-64454c8b5c-f7d7f -- mysql -ppassword -e "show databases;": exit status 1 (104.451469ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-558000 exec mysql-64454c8b5c-f7d7f -- mysql -ppassword -e "show databases;"
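Note: the ERROR 2002 attempts above are expected while mysqld is still initializing inside the pod; the test simply retries until the query succeeds. An equivalent manual retry loop (the pod name is from this run, and the retry count and sleep are illustrative):

	for i in $(seq 1 30); do
	  kubectl --context functional-558000 exec mysql-64454c8b5c-f7d7f -- mysql -ppassword -e "show databases;" && break
	  sleep 2
	done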
--- PASS: TestFunctional/parallel/MySQL (25.60s)

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1678/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "sudo cat /etc/test/nested/copy/1678/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

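Note: FileSync verifies that files dropped under $MINIKUBE_HOME/files on the host are mirrored into the guest at the same path when the cluster starts (here .../files/etc/test/nested/copy/1678/hosts). A sketch with an illustrative path:

    mkdir -p "$MINIKUBE_HOME/files/etc/demo"
    echo "Test file for checking file sync process" > "$MINIKUBE_HOME/files/etc/demo/hosts"
    out/minikube-darwin-amd64 start -p functional-558000      # files are synced on start
    out/minikube-darwin-amd64 -p functional-558000 ssh "sudo cat /etc/demo/hosts"
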
TestFunctional/parallel/CertSync (1.1s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1678.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "sudo cat /etc/ssl/certs/1678.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1678.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "sudo cat /usr/share/ca-certificates/1678.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/16782.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "sudo cat /etc/ssl/certs/16782.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/16782.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "sudo cat /usr/share/ca-certificates/16782.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.10s)

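Note: the .0 files checked above are OpenSSL hash links; the filename is the certificate's subject hash (the 1678.pem/51391683.0 and 16782.pem/3ec20f2e.0 pairings come from the checks above). A sketch of how such a name is derived:

    hash=$(openssl x509 -noout -subject_hash -in 1678.pem)    # prints 51391683 for this cert
    sudo cp 1678.pem "/etc/ssl/certs/${hash}.0"
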
TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-558000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.16s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 ssh "sudo systemctl is-active crio": exit status 1 (160.306057ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.16s)

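Note: on this docker-runtime cluster, `systemctl is-active crio` prints "inactive" and exits 3 (is-active exits 0 only for an active unit); minikube's ssh wrapper reports the remote status 3 on stderr and itself exits 1, which is exactly what the test asserts. Sketch:

    out/minikube-darwin-amd64 -p functional-558000 ssh "sudo systemctl is-active crio" \
      || echo "crio is not the active runtime"
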
TestFunctional/parallel/License (0.59s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.59s)

TestFunctional/parallel/Version/short (0.12s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.12s)

TestFunctional/parallel/Version/components (0.43s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.43s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-558000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-558000
docker.io/kicbase/echo-server:functional-558000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-558000 image ls --format short --alsologtostderr:
I0805 16:02:18.336357    3367 out.go:291] Setting OutFile to fd 1 ...
I0805 16:02:18.336619    3367 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 16:02:18.336625    3367 out.go:304] Setting ErrFile to fd 2...
I0805 16:02:18.336628    3367 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 16:02:18.336795    3367 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
I0805 16:02:18.337396    3367 config.go:182] Loaded profile config "functional-558000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 16:02:18.337489    3367 config.go:182] Loaded profile config "functional-558000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 16:02:18.337829    3367 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0805 16:02:18.337886    3367 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0805 16:02:18.346040    3367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50987
I0805 16:02:18.346449    3367 main.go:141] libmachine: () Calling .GetVersion
I0805 16:02:18.346859    3367 main.go:141] libmachine: Using API Version  1
I0805 16:02:18.346891    3367 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 16:02:18.347104    3367 main.go:141] libmachine: () Calling .GetMachineName
I0805 16:02:18.347223    3367 main.go:141] libmachine: (functional-558000) Calling .GetState
I0805 16:02:18.347313    3367 main.go:141] libmachine: (functional-558000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0805 16:02:18.347384    3367 main.go:141] libmachine: (functional-558000) DBG | hyperkit pid from json: 2306
I0805 16:02:18.348672    3367 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0805 16:02:18.348696    3367 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0805 16:02:18.356875    3367 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50989
I0805 16:02:18.357206    3367 main.go:141] libmachine: () Calling .GetVersion
I0805 16:02:18.357533    3367 main.go:141] libmachine: Using API Version  1
I0805 16:02:18.357542    3367 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 16:02:18.357754    3367 main.go:141] libmachine: () Calling .GetMachineName
I0805 16:02:18.357871    3367 main.go:141] libmachine: (functional-558000) Calling .DriverName
I0805 16:02:18.358028    3367 ssh_runner.go:195] Run: systemctl --version
I0805 16:02:18.358045    3367 main.go:141] libmachine: (functional-558000) Calling .GetSSHHostname
I0805 16:02:18.358113    3367 main.go:141] libmachine: (functional-558000) Calling .GetSSHPort
I0805 16:02:18.358197    3367 main.go:141] libmachine: (functional-558000) Calling .GetSSHKeyPath
I0805 16:02:18.358281    3367 main.go:141] libmachine: (functional-558000) Calling .GetSSHUsername
I0805 16:02:18.358363    3367 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/functional-558000/id_rsa Username:docker}
I0805 16:02:18.391354    3367 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0805 16:02:18.408654    3367 main.go:141] libmachine: Making call to close driver server
I0805 16:02:18.408662    3367 main.go:141] libmachine: (functional-558000) Calling .Close
I0805 16:02:18.408808    3367 main.go:141] libmachine: Successfully made call to close driver server
I0805 16:02:18.408819    3367 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 16:02:18.408826    3367 main.go:141] libmachine: Making call to close driver server
I0805 16:02:18.408839    3367 main.go:141] libmachine: (functional-558000) DBG | Closing plugin on server side
I0805 16:02:18.408856    3367 main.go:141] libmachine: (functional-558000) Calling .Close
I0805 16:02:18.408984    3367 main.go:141] libmachine: Successfully made call to close driver server
I0805 16:02:18.408995    3367 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 16:02:18.409001    3367 main.go:141] libmachine: (functional-558000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.15s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-558000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 76932a3b37d7e | 111MB  |
| docker.io/library/nginx                     | latest            | a72860cb95fd5 | 188MB  |
| docker.io/library/nginx                     | alpine            | 1ae23480369fa | 43.2MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-558000 | 8708212435d7f | 30B    |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 1f6d574d502f3 | 117MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kicbase/echo-server               | functional-558000 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-scheduler              | v1.30.3           | 3edc18e7b7672 | 62MB   |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 55bb025d2cfa5 | 84.7MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-558000 image ls --format table --alsologtostderr:
I0805 16:02:18.645668    3375 out.go:291] Setting OutFile to fd 1 ...
I0805 16:02:18.645953    3375 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 16:02:18.645959    3375 out.go:304] Setting ErrFile to fd 2...
I0805 16:02:18.645963    3375 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 16:02:18.646139    3375 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
I0805 16:02:18.646754    3375 config.go:182] Loaded profile config "functional-558000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 16:02:18.646844    3375 config.go:182] Loaded profile config "functional-558000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 16:02:18.647180    3375 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0805 16:02:18.647227    3375 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0805 16:02:18.656975    3375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50997
I0805 16:02:18.657383    3375 main.go:141] libmachine: () Calling .GetVersion
I0805 16:02:18.657795    3375 main.go:141] libmachine: Using API Version  1
I0805 16:02:18.657826    3375 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 16:02:18.658054    3375 main.go:141] libmachine: () Calling .GetMachineName
I0805 16:02:18.658173    3375 main.go:141] libmachine: (functional-558000) Calling .GetState
I0805 16:02:18.658260    3375 main.go:141] libmachine: (functional-558000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0805 16:02:18.658332    3375 main.go:141] libmachine: (functional-558000) DBG | hyperkit pid from json: 2306
I0805 16:02:18.659626    3375 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0805 16:02:18.659650    3375 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0805 16:02:18.668273    3375 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50999
I0805 16:02:18.668641    3375 main.go:141] libmachine: () Calling .GetVersion
I0805 16:02:18.669043    3375 main.go:141] libmachine: Using API Version  1
I0805 16:02:18.669065    3375 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 16:02:18.669287    3375 main.go:141] libmachine: () Calling .GetMachineName
I0805 16:02:18.669395    3375 main.go:141] libmachine: (functional-558000) Calling .DriverName
I0805 16:02:18.669560    3375 ssh_runner.go:195] Run: systemctl --version
I0805 16:02:18.669581    3375 main.go:141] libmachine: (functional-558000) Calling .GetSSHHostname
I0805 16:02:18.669671    3375 main.go:141] libmachine: (functional-558000) Calling .GetSSHPort
I0805 16:02:18.669756    3375 main.go:141] libmachine: (functional-558000) Calling .GetSSHKeyPath
I0805 16:02:18.669846    3375 main.go:141] libmachine: (functional-558000) Calling .GetSSHUsername
I0805 16:02:18.669928    3375 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/functional-558000/id_rsa Username:docker}
I0805 16:02:18.704939    3375 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0805 16:02:18.731637    3375 main.go:141] libmachine: Making call to close driver server
I0805 16:02:18.731647    3375 main.go:141] libmachine: (functional-558000) Calling .Close
I0805 16:02:18.731795    3375 main.go:141] libmachine: Successfully made call to close driver server
I0805 16:02:18.731801    3375 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 16:02:18.731809    3375 main.go:141] libmachine: Making call to close driver server
I0805 16:02:18.731813    3375 main.go:141] libmachine: (functional-558000) Calling .Close
I0805 16:02:18.732011    3375 main.go:141] libmachine: Successfully made call to close driver server
I0805 16:02:18.732021    3375 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 16:02:18.732013    3375 main.go:141] libmachine: (functional-558000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-558000 image ls --format json --alsologtostderr:
[{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"111000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-558000"],"size":"4940000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"62000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117000000"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"84700000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"8708212435d7fe712d5da853c42810be7d1e9b4456369aa98c0f6f88435f5b63","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-558000"],"size":"30"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-558000 image ls --format json --alsologtostderr:
I0805 16:02:18.487811    3371 out.go:291] Setting OutFile to fd 1 ...
I0805 16:02:18.488003    3371 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 16:02:18.488007    3371 out.go:304] Setting ErrFile to fd 2...
I0805 16:02:18.488011    3371 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 16:02:18.488200    3371 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
I0805 16:02:18.488892    3371 config.go:182] Loaded profile config "functional-558000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 16:02:18.488988    3371 config.go:182] Loaded profile config "functional-558000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 16:02:18.489353    3371 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0805 16:02:18.489401    3371 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0805 16:02:18.497505    3371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50992
I0805 16:02:18.497945    3371 main.go:141] libmachine: () Calling .GetVersion
I0805 16:02:18.498391    3371 main.go:141] libmachine: Using API Version  1
I0805 16:02:18.498401    3371 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 16:02:18.498607    3371 main.go:141] libmachine: () Calling .GetMachineName
I0805 16:02:18.498766    3371 main.go:141] libmachine: (functional-558000) Calling .GetState
I0805 16:02:18.498856    3371 main.go:141] libmachine: (functional-558000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0805 16:02:18.498939    3371 main.go:141] libmachine: (functional-558000) DBG | hyperkit pid from json: 2306
I0805 16:02:18.500226    3371 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0805 16:02:18.500249    3371 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0805 16:02:18.508404    3371 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50994
I0805 16:02:18.508756    3371 main.go:141] libmachine: () Calling .GetVersion
I0805 16:02:18.509085    3371 main.go:141] libmachine: Using API Version  1
I0805 16:02:18.509094    3371 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 16:02:18.509271    3371 main.go:141] libmachine: () Calling .GetMachineName
I0805 16:02:18.509378    3371 main.go:141] libmachine: (functional-558000) Calling .DriverName
I0805 16:02:18.509525    3371 ssh_runner.go:195] Run: systemctl --version
I0805 16:02:18.509546    3371 main.go:141] libmachine: (functional-558000) Calling .GetSSHHostname
I0805 16:02:18.509617    3371 main.go:141] libmachine: (functional-558000) Calling .GetSSHPort
I0805 16:02:18.509715    3371 main.go:141] libmachine: (functional-558000) Calling .GetSSHKeyPath
I0805 16:02:18.509789    3371 main.go:141] libmachine: (functional-558000) Calling .GetSSHUsername
I0805 16:02:18.509871    3371 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/functional-558000/id_rsa Username:docker}
I0805 16:02:18.545026    3371 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0805 16:02:18.565864    3371 main.go:141] libmachine: Making call to close driver server
I0805 16:02:18.565872    3371 main.go:141] libmachine: (functional-558000) Calling .Close
I0805 16:02:18.566025    3371 main.go:141] libmachine: Successfully made call to close driver server
I0805 16:02:18.566034    3371 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 16:02:18.566040    3371 main.go:141] libmachine: Making call to close driver server
I0805 16:02:18.566064    3371 main.go:141] libmachine: (functional-558000) DBG | Closing plugin on server side
I0805 16:02:18.566100    3371 main.go:141] libmachine: (functional-558000) Calling .Close
I0805 16:02:18.566249    3371 main.go:141] libmachine: Successfully made call to close driver server
I0805 16:02:18.566269    3371 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 16:02:18.566283    3371 main.go:141] libmachine: (functional-558000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image ls --format yaml --alsologtostderr
E0805 16:02:18.237415    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-558000 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "111000000"
- id: a72860cb95fd59e9c696c66441c64f18e66915fa26b249911e83c3854477ed9a
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-558000
size: "4940000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "62000000"
- id: 1ae23480369fa4139f6dec668d7a5a941b56ea174e9cf75e09771988fe621c95
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 8708212435d7fe712d5da853c42810be7d1e9b4456369aa98c0f6f88435f5b63
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-558000
size: "30"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117000000"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "84700000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-558000 image ls --format yaml --alsologtostderr:
I0805 16:02:18.188521    3363 out.go:291] Setting OutFile to fd 1 ...
I0805 16:02:18.188791    3363 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 16:02:18.188796    3363 out.go:304] Setting ErrFile to fd 2...
I0805 16:02:18.188800    3363 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 16:02:18.188960    3363 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
I0805 16:02:18.189541    3363 config.go:182] Loaded profile config "functional-558000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 16:02:18.189634    3363 config.go:182] Loaded profile config "functional-558000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 16:02:18.189959    3363 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0805 16:02:18.190002    3363 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0805 16:02:18.198323    3363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50981
I0805 16:02:18.198737    3363 main.go:141] libmachine: () Calling .GetVersion
I0805 16:02:18.199147    3363 main.go:141] libmachine: Using API Version  1
I0805 16:02:18.199161    3363 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 16:02:18.199400    3363 main.go:141] libmachine: () Calling .GetMachineName
I0805 16:02:18.199515    3363 main.go:141] libmachine: (functional-558000) Calling .GetState
I0805 16:02:18.199588    3363 main.go:141] libmachine: (functional-558000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0805 16:02:18.199659    3363 main.go:141] libmachine: (functional-558000) DBG | hyperkit pid from json: 2306
I0805 16:02:18.200980    3363 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0805 16:02:18.201003    3363 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0805 16:02:18.209633    3363 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50983
I0805 16:02:18.209967    3363 main.go:141] libmachine: () Calling .GetVersion
I0805 16:02:18.210336    3363 main.go:141] libmachine: Using API Version  1
I0805 16:02:18.210358    3363 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 16:02:18.210560    3363 main.go:141] libmachine: () Calling .GetMachineName
I0805 16:02:18.210665    3363 main.go:141] libmachine: (functional-558000) Calling .DriverName
I0805 16:02:18.210818    3363 ssh_runner.go:195] Run: systemctl --version
I0805 16:02:18.210845    3363 main.go:141] libmachine: (functional-558000) Calling .GetSSHHostname
I0805 16:02:18.210920    3363 main.go:141] libmachine: (functional-558000) Calling .GetSSHPort
I0805 16:02:18.210997    3363 main.go:141] libmachine: (functional-558000) Calling .GetSSHKeyPath
I0805 16:02:18.211074    3363 main.go:141] libmachine: (functional-558000) Calling .GetSSHUsername
I0805 16:02:18.211148    3363 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/functional-558000/id_rsa Username:docker}
I0805 16:02:18.240587    3363 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0805 16:02:18.258294    3363 main.go:141] libmachine: Making call to close driver server
I0805 16:02:18.258303    3363 main.go:141] libmachine: (functional-558000) Calling .Close
I0805 16:02:18.258456    3363 main.go:141] libmachine: (functional-558000) DBG | Closing plugin on server side
I0805 16:02:18.258459    3363 main.go:141] libmachine: Successfully made call to close driver server
I0805 16:02:18.258470    3363 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 16:02:18.258477    3363 main.go:141] libmachine: Making call to close driver server
I0805 16:02:18.258482    3363 main.go:141] libmachine: (functional-558000) Calling .Close
I0805 16:02:18.258620    3363 main.go:141] libmachine: Successfully made call to close driver server
I0805 16:02:18.258624    3363 main.go:141] libmachine: (functional-558000) DBG | Closing plugin on server side
I0805 16:02:18.258632    3363 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.15s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 ssh pgrep buildkitd: exit status 1 (130.861708ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image build -t localhost/my-image:functional-558000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-558000 image build -t localhost/my-image:functional-558000 testdata/build --alsologtostderr: (2.283682392s)
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-558000 image build -t localhost/my-image:functional-558000 testdata/build --alsologtostderr:
I0805 16:02:18.942237    3384 out.go:291] Setting OutFile to fd 1 ...
I0805 16:02:18.942612    3384 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 16:02:18.942618    3384 out.go:304] Setting ErrFile to fd 2...
I0805 16:02:18.942622    3384 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0805 16:02:18.942806    3384 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
I0805 16:02:18.943421    3384 config.go:182] Loaded profile config "functional-558000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 16:02:18.944120    3384 config.go:182] Loaded profile config "functional-558000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0805 16:02:18.944472    3384 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0805 16:02:18.944514    3384 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0805 16:02:18.953105    3384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51010
I0805 16:02:18.953506    3384 main.go:141] libmachine: () Calling .GetVersion
I0805 16:02:18.953922    3384 main.go:141] libmachine: Using API Version  1
I0805 16:02:18.953936    3384 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 16:02:18.954203    3384 main.go:141] libmachine: () Calling .GetMachineName
I0805 16:02:18.954359    3384 main.go:141] libmachine: (functional-558000) Calling .GetState
I0805 16:02:18.954454    3384 main.go:141] libmachine: (functional-558000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0805 16:02:18.954519    3384 main.go:141] libmachine: (functional-558000) DBG | hyperkit pid from json: 2306
I0805 16:02:18.955828    3384 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0805 16:02:18.955849    3384 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0805 16:02:18.964503    3384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51012
I0805 16:02:18.964853    3384 main.go:141] libmachine: () Calling .GetVersion
I0805 16:02:18.965267    3384 main.go:141] libmachine: Using API Version  1
I0805 16:02:18.965290    3384 main.go:141] libmachine: () Calling .SetConfigRaw
I0805 16:02:18.965492    3384 main.go:141] libmachine: () Calling .GetMachineName
I0805 16:02:18.965584    3384 main.go:141] libmachine: (functional-558000) Calling .DriverName
I0805 16:02:18.965742    3384 ssh_runner.go:195] Run: systemctl --version
I0805 16:02:18.965763    3384 main.go:141] libmachine: (functional-558000) Calling .GetSSHHostname
I0805 16:02:18.965834    3384 main.go:141] libmachine: (functional-558000) Calling .GetSSHPort
I0805 16:02:18.965908    3384 main.go:141] libmachine: (functional-558000) Calling .GetSSHKeyPath
I0805 16:02:18.966033    3384 main.go:141] libmachine: (functional-558000) Calling .GetSSHUsername
I0805 16:02:18.966105    3384 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/functional-558000/id_rsa Username:docker}
I0805 16:02:18.997224    3384 build_images.go:161] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3561089832.tar
I0805 16:02:18.997293    3384 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0805 16:02:19.007822    3384 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3561089832.tar
I0805 16:02:19.012574    3384 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3561089832.tar: stat -c "%s %y" /var/lib/minikube/build/build.3561089832.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3561089832.tar': No such file or directory
I0805 16:02:19.012606    3384 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3561089832.tar --> /var/lib/minikube/build/build.3561089832.tar (3072 bytes)
I0805 16:02:19.036884    3384 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3561089832
I0805 16:02:19.059066    3384 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3561089832 -xf /var/lib/minikube/build/build.3561089832.tar
I0805 16:02:19.072580    3384 docker.go:360] Building image: /var/lib/minikube/build/build.3561089832
I0805 16:02:19.072649    3384 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-558000 /var/lib/minikube/build/build.3561089832
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.0s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.3s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:8d084b8dc6ffd8655a467be8e31d0bbeef9a95689cd5c34ab9fe83554922c3a2 done
#8 naming to localhost/my-image:functional-558000 done
#8 DONE 0.0s
I0805 16:02:21.123821    3384 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-558000 /var/lib/minikube/build/build.3561089832: (2.051157064s)
I0805 16:02:21.123888    3384 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3561089832
I0805 16:02:21.132549    3384 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3561089832.tar
I0805 16:02:21.140327    3384 build_images.go:217] Built localhost/my-image:functional-558000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.3561089832.tar
I0805 16:02:21.140352    3384 build_images.go:133] succeeded building to: functional-558000
I0805 16:02:21.140357    3384 build_images.go:134] failed building to: 
I0805 16:02:21.140376    3384 main.go:141] libmachine: Making call to close driver server
I0805 16:02:21.140382    3384 main.go:141] libmachine: (functional-558000) Calling .Close
I0805 16:02:21.140547    3384 main.go:141] libmachine: Successfully made call to close driver server
I0805 16:02:21.140555    3384 main.go:141] libmachine: Making call to close connection to plugin binary
I0805 16:02:21.140562    3384 main.go:141] libmachine: Making call to close driver server
I0805 16:02:21.140570    3384 main.go:141] libmachine: (functional-558000) Calling .Close
I0805 16:02:21.140739    3384 main.go:141] libmachine: (functional-558000) DBG | Closing plugin on server side
I0805 16:02:21.140752    3384 main.go:141] libmachine: Successfully made call to close driver server
I0805 16:02:21.140764    3384 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image ls
2024/08/05 16:02:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.57s)

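Note: from build steps #5-#7 above, testdata/build is a three-step build (FROM busybox, RUN true, ADD content.txt). A bash sketch that reproduces an equivalent build; the Dockerfile and content.txt here are reconstructions inferred from the log, not the repo's actual testdata:

    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /
    EOF
    echo "test" > content.txt
    out/minikube-darwin-amd64 -p functional-558000 image build -t localhost/my-image:functional-558000 .
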
TestFunctional/parallel/ImageCommands/Setup (1.84s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.806409712s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-558000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.84s)

TestFunctional/parallel/DockerEnv/bash (0.62s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-558000 docker-env) && out/minikube-darwin-amd64 status -p functional-558000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-558000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.62s)

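Note: docker-env emits the environment variables that point the host's docker CLI at the daemon inside the VM, and evaluating them rebinds only the current shell (--unset reverses it). Sketch:

    eval $(out/minikube-darwin-amd64 -p functional-558000 docker-env)
    docker images                                                    # now lists images inside the VM
    eval $(out/minikube-darwin-amd64 -p functional-558000 docker-env --unset)
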
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

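Note: update-context rewrites the kubeconfig entry for the profile so its server address matches the VM's current IP. A sketch for checking the result (the jsonpath query is illustrative):

    out/minikube-darwin-amd64 -p functional-558000 update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-558000")].cluster.server}'
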
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image load --daemon docker.io/kicbase/echo-server:functional-558000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.01s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image load --daemon docker.io/kicbase/echo-server:functional-558000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.64s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-558000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image load --daemon docker.io/kicbase/echo-server:functional-558000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image save docker.io/kicbase/echo-server:functional-558000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image rm docker.io/kicbase/echo-server:functional-558000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.41s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.70s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-558000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 image save --daemon docker.io/kicbase/echo-server:functional-558000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-558000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

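Note: the ImageCommands tests above form a host-to-VM round trip; chained together they look like this (commands as run above, with an illustrative tar path):

    docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-558000
    out/minikube-darwin-amd64 -p functional-558000 image load --daemon docker.io/kicbase/echo-server:functional-558000
    out/minikube-darwin-amd64 -p functional-558000 image save docker.io/kicbase/echo-server:functional-558000 /tmp/echo-server-save.tar
    out/minikube-darwin-amd64 -p functional-558000 image rm docker.io/kicbase/echo-server:functional-558000
    out/minikube-darwin-amd64 -p functional-558000 image load /tmp/echo-server-save.tar
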
TestFunctional/parallel/ServiceCmd/DeployApp (22.13s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-558000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-558000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-9cxlg" [e132ed97-2905-42bf-96f9-30496fcf65a1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-9cxlg" [e132ed97-2905-42bf-96f9-30496fcf65a1] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 22.003087763s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (22.13s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-558000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-558000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-558000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3058: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-558000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-558000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-558000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [51333b8b-0def-4261-b6ac-d54ebca45ce4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [51333b8b-0def-4261-b6ac-d54ebca45ce4] Running
E0805 16:01:50.544261    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003613515s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.23s)

TestFunctional/parallel/ServiceCmd/List (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.41s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 service list -o json
functional_test.go:1490: Took "368.893335ms" to run "out/minikube-darwin-amd64 -p functional-558000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.37s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.169.0.4:31704
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

TestFunctional/parallel/ServiceCmd/Format (0.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.25s)

TestFunctional/parallel/ServiceCmd/URL (0.24s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.169.0.4:31704
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.24s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-558000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.96.201.84 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-558000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.14s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "178.128131ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "77.745912ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "192.03553ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "78.216847ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.27s)

TestFunctional/parallel/MountCmd/any-port (6.09s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3846961856/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722898926906178000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3846961856/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722898926906178000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3846961856/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722898926906178000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3846961856/001/test-1722898926906178000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (153.075068ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug  5 23:02 created-by-test
-rw-r--r-- 1 docker docker 24 Aug  5 23:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug  5 23:02 test-1722898926906178000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh cat /mount-9p/test-1722898926906178000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-558000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0e4c3ea9-160f-4181-915c-2c324b8b6341] Pending
helpers_test.go:344: "busybox-mount" [0e4c3ea9-160f-4181-915c-2c324b8b6341] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0e4c3ea9-160f-4181-915c-2c324b8b6341] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0e4c3ea9-160f-4181-915c-2c324b8b6341] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004014864s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-558000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3846961856/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.09s)

TestFunctional/parallel/MountCmd/specific-port (1.76s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port796618061/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (154.52443ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port796618061/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 ssh "sudo umount -f /mount-9p": exit status 1 (133.136279ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-558000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port796618061/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.76s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.24s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1460291316/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1460291316/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1460291316/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T" /mount1: exit status 1 (238.602128ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T" /mount1: exit status 1 (178.270759ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-558000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-558000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1460291316/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1460291316/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-558000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1460291316/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.24s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-558000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-558000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-558000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (216.38s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-968000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-968000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (3m36.012714544s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (216.38s)

TestMultiControlPlane/serial/DeployApp (6.23s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-968000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-968000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-968000 -- rollout status deployment/busybox: (3.915313554s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-968000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-968000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-968000 -- exec busybox-fc5497c4f-k62jp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-968000 -- exec busybox-fc5497c4f-pxn97 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-968000 -- exec busybox-fc5497c4f-rmn5x -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-968000 -- exec busybox-fc5497c4f-k62jp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-968000 -- exec busybox-fc5497c4f-pxn97 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-968000 -- exec busybox-fc5497c4f-rmn5x -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-968000 -- exec busybox-fc5497c4f-k62jp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-968000 -- exec busybox-fc5497c4f-pxn97 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-968000 -- exec busybox-fc5497c4f-rmn5x -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.23s)

TestMultiControlPlane/serial/PingHostFromPods (1.34s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-968000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-968000 -- exec busybox-fc5497c4f-k62jp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-968000 -- exec busybox-fc5497c4f-k62jp -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-968000 -- exec busybox-fc5497c4f-pxn97 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-968000 -- exec busybox-fc5497c4f-pxn97 -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-968000 -- exec busybox-fc5497c4f-rmn5x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-968000 -- exec busybox-fc5497c4f-rmn5x -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.34s)

TestMultiControlPlane/serial/AddWorkerNode (49.66s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-968000 -v=7 --alsologtostderr
E0805 16:06:19.143130    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:06:19.148494    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:06:19.158847    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:06:19.180803    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:06:19.221816    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:06:19.303325    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:06:19.464983    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:06:19.785603    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:06:20.425766    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:06:21.707272    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:06:24.268206    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:06:29.389607    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:06:39.630868    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 16:06:50.545510    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 16:07:00.112446    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-968000 -v=7 --alsologtostderr: (49.229950877s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (49.66s)

TestMultiControlPlane/serial/NodeLabels (0.05s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-968000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.31s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.31s)

TestMultiControlPlane/serial/CopyFile (8.87s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 cp testdata/cp-test.txt ha-968000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 cp ha-968000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1635686668/001/cp-test_ha-968000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 cp ha-968000:/home/docker/cp-test.txt ha-968000-m02:/home/docker/cp-test_ha-968000_ha-968000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m02 "sudo cat /home/docker/cp-test_ha-968000_ha-968000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 cp ha-968000:/home/docker/cp-test.txt ha-968000-m03:/home/docker/cp-test_ha-968000_ha-968000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m03 "sudo cat /home/docker/cp-test_ha-968000_ha-968000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 cp ha-968000:/home/docker/cp-test.txt ha-968000-m04:/home/docker/cp-test_ha-968000_ha-968000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m04 "sudo cat /home/docker/cp-test_ha-968000_ha-968000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 cp testdata/cp-test.txt ha-968000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 cp ha-968000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1635686668/001/cp-test_ha-968000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 cp ha-968000-m02:/home/docker/cp-test.txt ha-968000:/home/docker/cp-test_ha-968000-m02_ha-968000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000 "sudo cat /home/docker/cp-test_ha-968000-m02_ha-968000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 cp ha-968000-m02:/home/docker/cp-test.txt ha-968000-m03:/home/docker/cp-test_ha-968000-m02_ha-968000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m03 "sudo cat /home/docker/cp-test_ha-968000-m02_ha-968000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 cp ha-968000-m02:/home/docker/cp-test.txt ha-968000-m04:/home/docker/cp-test_ha-968000-m02_ha-968000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m04 "sudo cat /home/docker/cp-test_ha-968000-m02_ha-968000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 cp testdata/cp-test.txt ha-968000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 cp ha-968000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1635686668/001/cp-test_ha-968000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 cp ha-968000-m03:/home/docker/cp-test.txt ha-968000:/home/docker/cp-test_ha-968000-m03_ha-968000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000 "sudo cat /home/docker/cp-test_ha-968000-m03_ha-968000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 cp ha-968000-m03:/home/docker/cp-test.txt ha-968000-m02:/home/docker/cp-test_ha-968000-m03_ha-968000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m02 "sudo cat /home/docker/cp-test_ha-968000-m03_ha-968000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 cp ha-968000-m03:/home/docker/cp-test.txt ha-968000-m04:/home/docker/cp-test_ha-968000-m03_ha-968000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m04 "sudo cat /home/docker/cp-test_ha-968000-m03_ha-968000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 cp testdata/cp-test.txt ha-968000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 cp ha-968000-m04:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile1635686668/001/cp-test_ha-968000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 cp ha-968000-m04:/home/docker/cp-test.txt ha-968000:/home/docker/cp-test_ha-968000-m04_ha-968000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000 "sudo cat /home/docker/cp-test_ha-968000-m04_ha-968000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 cp ha-968000-m04:/home/docker/cp-test.txt ha-968000-m02:/home/docker/cp-test_ha-968000-m04_ha-968000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m02 "sudo cat /home/docker/cp-test_ha-968000-m04_ha-968000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 cp ha-968000-m04:/home/docker/cp-test.txt ha-968000-m03:/home/docker/cp-test_ha-968000-m04_ha-968000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 ssh -n ha-968000-m03 "sudo cat /home/docker/cp-test_ha-968000-m04_ha-968000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (8.87s)

TestMultiControlPlane/serial/StopSecondaryNode (8.69s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-968000 node stop m02 -v=7 --alsologtostderr: (8.352061519s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-968000 status -v=7 --alsologtostderr: exit status 7 (339.148797ms)

-- stdout --
	ha-968000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-968000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-968000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-968000-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0805 16:07:25.263405    3931 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:07:25.263703    3931 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:07:25.263709    3931 out.go:304] Setting ErrFile to fd 2...
	I0805 16:07:25.263713    3931 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:07:25.263895    3931 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:07:25.264094    3931 out.go:298] Setting JSON to false
	I0805 16:07:25.264116    3931 mustload.go:65] Loading cluster: ha-968000
	I0805 16:07:25.264162    3931 notify.go:220] Checking for updates...
	I0805 16:07:25.264422    3931 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:07:25.264440    3931 status.go:255] checking status of ha-968000 ...
	I0805 16:07:25.264806    3931 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:07:25.264854    3931 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:07:25.273746    3931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51748
	I0805 16:07:25.274104    3931 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:07:25.274514    3931 main.go:141] libmachine: Using API Version  1
	I0805 16:07:25.274526    3931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:07:25.274723    3931 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:07:25.274875    3931 main.go:141] libmachine: (ha-968000) Calling .GetState
	I0805 16:07:25.274972    3931 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:07:25.275054    3931 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid from json: 3418
	I0805 16:07:25.276047    3931 status.go:330] ha-968000 host status = "Running" (err=<nil>)
	I0805 16:07:25.276067    3931 host.go:66] Checking if "ha-968000" exists ...
	I0805 16:07:25.276321    3931 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:07:25.276340    3931 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:07:25.284775    3931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51750
	I0805 16:07:25.285125    3931 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:07:25.285463    3931 main.go:141] libmachine: Using API Version  1
	I0805 16:07:25.285481    3931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:07:25.285717    3931 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:07:25.285830    3931 main.go:141] libmachine: (ha-968000) Calling .GetIP
	I0805 16:07:25.285914    3931 host.go:66] Checking if "ha-968000" exists ...
	I0805 16:07:25.286184    3931 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:07:25.286209    3931 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:07:25.295910    3931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51752
	I0805 16:07:25.296282    3931 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:07:25.296601    3931 main.go:141] libmachine: Using API Version  1
	I0805 16:07:25.296612    3931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:07:25.296861    3931 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:07:25.296990    3931 main.go:141] libmachine: (ha-968000) Calling .DriverName
	I0805 16:07:25.297142    3931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:07:25.297164    3931 main.go:141] libmachine: (ha-968000) Calling .GetSSHHostname
	I0805 16:07:25.297234    3931 main.go:141] libmachine: (ha-968000) Calling .GetSSHPort
	I0805 16:07:25.297313    3931 main.go:141] libmachine: (ha-968000) Calling .GetSSHKeyPath
	I0805 16:07:25.297398    3931 main.go:141] libmachine: (ha-968000) Calling .GetSSHUsername
	I0805 16:07:25.297478    3931 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000/id_rsa Username:docker}
	I0805 16:07:25.327189    3931 ssh_runner.go:195] Run: systemctl --version
	I0805 16:07:25.331445    3931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:07:25.342717    3931 kubeconfig.go:125] found "ha-968000" server: "https://192.169.0.254:8443"
	I0805 16:07:25.342745    3931 api_server.go:166] Checking apiserver status ...
	I0805 16:07:25.342785    3931 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:07:25.353646    3931 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2103/cgroup
	W0805 16:07:25.361177    3931 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2103/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:07:25.361244    3931 ssh_runner.go:195] Run: ls
	I0805 16:07:25.364528    3931 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0805 16:07:25.368938    3931 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0805 16:07:25.368954    3931 status.go:422] ha-968000 apiserver status = Running (err=<nil>)
	I0805 16:07:25.368971    3931 status.go:257] ha-968000 status: &{Name:ha-968000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:07:25.368983    3931 status.go:255] checking status of ha-968000-m02 ...
	I0805 16:07:25.369243    3931 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:07:25.369265    3931 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:07:25.377766    3931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51756
	I0805 16:07:25.378123    3931 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:07:25.378471    3931 main.go:141] libmachine: Using API Version  1
	I0805 16:07:25.378489    3931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:07:25.378709    3931 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:07:25.378832    3931 main.go:141] libmachine: (ha-968000-m02) Calling .GetState
	I0805 16:07:25.378920    3931 main.go:141] libmachine: (ha-968000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:07:25.378993    3931 main.go:141] libmachine: (ha-968000-m02) DBG | hyperkit pid from json: 3440
	I0805 16:07:25.379979    3931 main.go:141] libmachine: (ha-968000-m02) DBG | hyperkit pid 3440 missing from process table
	I0805 16:07:25.379997    3931 status.go:330] ha-968000-m02 host status = "Stopped" (err=<nil>)
	I0805 16:07:25.380002    3931 status.go:343] host is not running, skipping remaining checks
	I0805 16:07:25.380010    3931 status.go:257] ha-968000-m02 status: &{Name:ha-968000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:07:25.380022    3931 status.go:255] checking status of ha-968000-m03 ...
	I0805 16:07:25.380293    3931 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:07:25.380315    3931 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:07:25.388536    3931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51758
	I0805 16:07:25.388863    3931 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:07:25.389215    3931 main.go:141] libmachine: Using API Version  1
	I0805 16:07:25.389230    3931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:07:25.389452    3931 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:07:25.389573    3931 main.go:141] libmachine: (ha-968000-m03) Calling .GetState
	I0805 16:07:25.389654    3931 main.go:141] libmachine: (ha-968000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:07:25.389739    3931 main.go:141] libmachine: (ha-968000-m03) DBG | hyperkit pid from json: 3471
	I0805 16:07:25.390744    3931 status.go:330] ha-968000-m03 host status = "Running" (err=<nil>)
	I0805 16:07:25.390751    3931 host.go:66] Checking if "ha-968000-m03" exists ...
	I0805 16:07:25.391007    3931 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:07:25.391031    3931 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:07:25.399278    3931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51760
	I0805 16:07:25.399626    3931 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:07:25.399928    3931 main.go:141] libmachine: Using API Version  1
	I0805 16:07:25.399935    3931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:07:25.400157    3931 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:07:25.400279    3931 main.go:141] libmachine: (ha-968000-m03) Calling .GetIP
	I0805 16:07:25.400381    3931 host.go:66] Checking if "ha-968000-m03" exists ...
	I0805 16:07:25.400632    3931 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:07:25.400664    3931 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:07:25.408916    3931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51762
	I0805 16:07:25.409273    3931 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:07:25.409613    3931 main.go:141] libmachine: Using API Version  1
	I0805 16:07:25.409624    3931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:07:25.409823    3931 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:07:25.409914    3931 main.go:141] libmachine: (ha-968000-m03) Calling .DriverName
	I0805 16:07:25.410055    3931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:07:25.410077    3931 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHHostname
	I0805 16:07:25.410153    3931 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHPort
	I0805 16:07:25.410224    3931 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHKeyPath
	I0805 16:07:25.410336    3931 main.go:141] libmachine: (ha-968000-m03) Calling .GetSSHUsername
	I0805 16:07:25.410423    3931 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m03/id_rsa Username:docker}
	I0805 16:07:25.439701    3931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:07:25.451333    3931 kubeconfig.go:125] found "ha-968000" server: "https://192.169.0.254:8443"
	I0805 16:07:25.451348    3931 api_server.go:166] Checking apiserver status ...
	I0805 16:07:25.451384    3931 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0805 16:07:25.462752    3931 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2018/cgroup
	W0805 16:07:25.469836    3931 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2018/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0805 16:07:25.469889    3931 ssh_runner.go:195] Run: ls
	I0805 16:07:25.473585    3931 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0805 16:07:25.476724    3931 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0805 16:07:25.476742    3931 status.go:422] ha-968000-m03 apiserver status = Running (err=<nil>)
	I0805 16:07:25.476751    3931 status.go:257] ha-968000-m03 status: &{Name:ha-968000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:07:25.476762    3931 status.go:255] checking status of ha-968000-m04 ...
	I0805 16:07:25.477020    3931 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:07:25.477040    3931 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:07:25.485426    3931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51766
	I0805 16:07:25.485789    3931 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:07:25.486145    3931 main.go:141] libmachine: Using API Version  1
	I0805 16:07:25.486161    3931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:07:25.486392    3931 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:07:25.486518    3931 main.go:141] libmachine: (ha-968000-m04) Calling .GetState
	I0805 16:07:25.486601    3931 main.go:141] libmachine: (ha-968000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:07:25.486687    3931 main.go:141] libmachine: (ha-968000-m04) DBG | hyperkit pid from json: 3587
	I0805 16:07:25.487694    3931 status.go:330] ha-968000-m04 host status = "Running" (err=<nil>)
	I0805 16:07:25.487703    3931 host.go:66] Checking if "ha-968000-m04" exists ...
	I0805 16:07:25.487949    3931 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:07:25.487971    3931 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:07:25.496293    3931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51768
	I0805 16:07:25.496647    3931 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:07:25.496996    3931 main.go:141] libmachine: Using API Version  1
	I0805 16:07:25.497011    3931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:07:25.497192    3931 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:07:25.497299    3931 main.go:141] libmachine: (ha-968000-m04) Calling .GetIP
	I0805 16:07:25.497378    3931 host.go:66] Checking if "ha-968000-m04" exists ...
	I0805 16:07:25.497638    3931 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:07:25.497662    3931 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:07:25.505879    3931 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51770
	I0805 16:07:25.506241    3931 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:07:25.506586    3931 main.go:141] libmachine: Using API Version  1
	I0805 16:07:25.506604    3931 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:07:25.506822    3931 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:07:25.506934    3931 main.go:141] libmachine: (ha-968000-m04) Calling .DriverName
	I0805 16:07:25.507062    3931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0805 16:07:25.507074    3931 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHHostname
	I0805 16:07:25.507149    3931 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHPort
	I0805 16:07:25.507237    3931 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHKeyPath
	I0805 16:07:25.507320    3931 main.go:141] libmachine: (ha-968000-m04) Calling .GetSSHUsername
	I0805 16:07:25.507402    3931 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19373-1122/.minikube/machines/ha-968000-m04/id_rsa Username:docker}
	I0805 16:07:25.535725    3931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0805 16:07:25.547159    3931 status.go:257] ha-968000-m04 status: &{Name:ha-968000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.69s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.26s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.26s)

TestMultiControlPlane/serial/RestartSecondaryNode (42.34s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 node start m02 -v=7 --alsologtostderr
E0805 16:07:41.073329    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-968000 node start m02 -v=7 --alsologtostderr: (41.847629421s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (42.34s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.34s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.34s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.26s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.26s)

TestMultiControlPlane/serial/StopCluster (24.94s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-968000 stop -v=7 --alsologtostderr: (24.841919658s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-968000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-968000 status -v=7 --alsologtostderr: exit status 7 (95.530527ms)

-- stdout --
	ha-968000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-968000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-968000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0805 16:12:52.250390    4192 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:12:52.250668    4192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:12:52.250673    4192 out.go:304] Setting ErrFile to fd 2...
	I0805 16:12:52.250677    4192 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:12:52.250841    4192 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:12:52.251019    4192 out.go:298] Setting JSON to false
	I0805 16:12:52.251042    4192 mustload.go:65] Loading cluster: ha-968000
	I0805 16:12:52.251080    4192 notify.go:220] Checking for updates...
	I0805 16:12:52.251330    4192 config.go:182] Loaded profile config "ha-968000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:12:52.251346    4192 status.go:255] checking status of ha-968000 ...
	I0805 16:12:52.251693    4192 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:12:52.251736    4192 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:12:52.260726    4192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52120
	I0805 16:12:52.261064    4192 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:12:52.261478    4192 main.go:141] libmachine: Using API Version  1
	I0805 16:12:52.261489    4192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:12:52.261695    4192 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:12:52.261795    4192 main.go:141] libmachine: (ha-968000) Calling .GetState
	I0805 16:12:52.261884    4192 main.go:141] libmachine: (ha-968000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:12:52.261945    4192 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid from json: 4025
	I0805 16:12:52.262834    4192 main.go:141] libmachine: (ha-968000) DBG | hyperkit pid 4025 missing from process table
	I0805 16:12:52.262870    4192 status.go:330] ha-968000 host status = "Stopped" (err=<nil>)
	I0805 16:12:52.262878    4192 status.go:343] host is not running, skipping remaining checks
	I0805 16:12:52.262885    4192 status.go:257] ha-968000 status: &{Name:ha-968000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:12:52.262909    4192 status.go:255] checking status of ha-968000-m02 ...
	I0805 16:12:52.263147    4192 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:12:52.263168    4192 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:12:52.271332    4192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52122
	I0805 16:12:52.271669    4192 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:12:52.272032    4192 main.go:141] libmachine: Using API Version  1
	I0805 16:12:52.272053    4192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:12:52.272277    4192 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:12:52.272391    4192 main.go:141] libmachine: (ha-968000-m02) Calling .GetState
	I0805 16:12:52.272477    4192 main.go:141] libmachine: (ha-968000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:12:52.272546    4192 main.go:141] libmachine: (ha-968000-m02) DBG | hyperkit pid from json: 4036
	I0805 16:12:52.273466    4192 main.go:141] libmachine: (ha-968000-m02) DBG | hyperkit pid 4036 missing from process table
	I0805 16:12:52.273483    4192 status.go:330] ha-968000-m02 host status = "Stopped" (err=<nil>)
	I0805 16:12:52.273494    4192 status.go:343] host is not running, skipping remaining checks
	I0805 16:12:52.273501    4192 status.go:257] ha-968000-m02 status: &{Name:ha-968000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:12:52.273514    4192 status.go:255] checking status of ha-968000-m04 ...
	I0805 16:12:52.273775    4192 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:12:52.273800    4192 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:12:52.287799    4192 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52124
	I0805 16:12:52.288168    4192 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:12:52.288523    4192 main.go:141] libmachine: Using API Version  1
	I0805 16:12:52.288534    4192 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:12:52.288740    4192 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:12:52.288917    4192 main.go:141] libmachine: (ha-968000-m04) Calling .GetState
	I0805 16:12:52.289029    4192 main.go:141] libmachine: (ha-968000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:12:52.289107    4192 main.go:141] libmachine: (ha-968000-m04) DBG | hyperkit pid from json: 4076
	I0805 16:12:52.290043    4192 status.go:330] ha-968000-m04 host status = "Stopped" (err=<nil>)
	I0805 16:12:52.290046    4192 main.go:141] libmachine: (ha-968000-m04) DBG | hyperkit pid 4076 missing from process table
	I0805 16:12:52.290052    4192 status.go:343] host is not running, skipping remaining checks
	I0805 16:12:52.290059    4192 status.go:257] ha-968000-m04 status: &{Name:ha-968000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (24.94s)

TestImageBuild/serial/Setup (38.32s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-364000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-364000 --driver=hyperkit : (38.321774868s)
--- PASS: TestImageBuild/serial/Setup (38.32s)

TestImageBuild/serial/NormalBuild (1.58s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-364000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-364000: (1.578575857s)
--- PASS: TestImageBuild/serial/NormalBuild (1.58s)

TestImageBuild/serial/BuildWithBuildArg (0.72s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-364000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.72s)

TestImageBuild/serial/BuildWithDockerIgnore (0.52s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-364000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.52s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.54s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-364000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.54s)

TestJSONOutput/start/Command (89.92s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-702000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E0805 16:16:19.142350    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-702000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (1m29.916198354s)
--- PASS: TestJSONOutput/start/Command (89.92s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.48s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-702000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.47s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-702000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.47s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.33s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-702000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-702000 --output=json --user=testUser: (8.331313957s)
--- PASS: TestJSONOutput/stop/Command (8.33s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.57s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-623000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-623000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (358.76041ms)

-- stdout --
	{"specversion":"1.0","id":"a37daf3e-ec76-4c86-94b6-3ec66321cf26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-623000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"236dbc25-3d83-40cc-9c89-97fa78ca7628","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19373"}}
	{"specversion":"1.0","id":"9d838a46-ba72-4127-955f-2b57c4d36ac2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig"}}
	{"specversion":"1.0","id":"c239fd2b-c7ec-47c0-9197-0ed7d2603350","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"7fb7761e-9ef0-4818-81d9-3c136c257236","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5013810e-8ae2-4473-95af-f3e472dca82e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube"}}
	{"specversion":"1.0","id":"1fe8bcfb-644b-417d-99a9-fd315a72d9e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ead3bfa7-95f0-46ba-8e05-e34f4eb9a284","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-623000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-623000
--- PASS: TestErrorJSONOutput (0.57s)

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (87.03s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-742000 --driver=hyperkit 
E0805 16:16:50.545571    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-742000 --driver=hyperkit : (38.690183189s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-744000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-744000 --driver=hyperkit : (38.901646651s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-742000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-744000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-744000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-744000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-744000: (3.436724384s)
helpers_test.go:175: Cleaning up "first-742000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-742000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-742000: (5.233651969s)
--- PASS: TestMinikubeProfile (87.03s)

TestMultiNode/serial/MultiNodeLabels (0.05s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-985000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

TestMultiNode/serial/ProfileList (0.17s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.17s)

TestMultiNode/serial/StopMultiNode (16.77s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-985000 stop: (16.614332712s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status: exit status 7 (76.269164ms)

-- stdout --
	multinode-985000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-985000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-985000 status --alsologtostderr: exit status 7 (76.977297ms)

-- stdout --
	multinode-985000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-985000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0805 16:42:14.524649    5655 out.go:291] Setting OutFile to fd 1 ...
	I0805 16:42:14.524913    5655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:42:14.524922    5655 out.go:304] Setting ErrFile to fd 2...
	I0805 16:42:14.524927    5655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0805 16:42:14.525095    5655 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19373-1122/.minikube/bin
	I0805 16:42:14.525270    5655 out.go:298] Setting JSON to false
	I0805 16:42:14.525292    5655 mustload.go:65] Loading cluster: multinode-985000
	I0805 16:42:14.525334    5655 notify.go:220] Checking for updates...
	I0805 16:42:14.525584    5655 config.go:182] Loaded profile config "multinode-985000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0805 16:42:14.525600    5655 status.go:255] checking status of multinode-985000 ...
	I0805 16:42:14.525968    5655 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:42:14.526014    5655 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:42:14.535096    5655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53265
	I0805 16:42:14.535532    5655 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:42:14.535946    5655 main.go:141] libmachine: Using API Version  1
	I0805 16:42:14.535957    5655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:42:14.536148    5655 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:42:14.536266    5655 main.go:141] libmachine: (multinode-985000) Calling .GetState
	I0805 16:42:14.536357    5655 main.go:141] libmachine: (multinode-985000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:42:14.536449    5655 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid from json: 5533
	I0805 16:42:14.537328    5655 main.go:141] libmachine: (multinode-985000) DBG | hyperkit pid 5533 missing from process table
	I0805 16:42:14.537352    5655 status.go:330] multinode-985000 host status = "Stopped" (err=<nil>)
	I0805 16:42:14.537363    5655 status.go:343] host is not running, skipping remaining checks
	I0805 16:42:14.537369    5655 status.go:257] multinode-985000 status: &{Name:multinode-985000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0805 16:42:14.537385    5655 status.go:255] checking status of multinode-985000-m02 ...
	I0805 16:42:14.537616    5655 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0805 16:42:14.537648    5655 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0805 16:42:14.545734    5655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53267
	I0805 16:42:14.546046    5655 main.go:141] libmachine: () Calling .GetVersion
	I0805 16:42:14.546387    5655 main.go:141] libmachine: Using API Version  1
	I0805 16:42:14.546404    5655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0805 16:42:14.546605    5655 main.go:141] libmachine: () Calling .GetMachineName
	I0805 16:42:14.546729    5655 main.go:141] libmachine: (multinode-985000-m02) Calling .GetState
	I0805 16:42:14.546826    5655 main.go:141] libmachine: (multinode-985000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0805 16:42:14.546916    5655 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid from json: 5546
	I0805 16:42:14.547772    5655 main.go:141] libmachine: (multinode-985000-m02) DBG | hyperkit pid 5546 missing from process table
	I0805 16:42:14.547808    5655 status.go:330] multinode-985000-m02 host status = "Stopped" (err=<nil>)
	I0805 16:42:14.547813    5655 status.go:343] host is not running, skipping remaining checks
	I0805 16:42:14.547819    5655 status.go:257] multinode-985000-m02 status: &{Name:multinode-985000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.77s)

TestMultiNode/serial/ValidateNameConflict (44.54s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-985000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-985000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-985000-m02 --driver=hyperkit : exit status 14 (418.477044ms)

-- stdout --
	* [multinode-985000-m02] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-985000-m02' is duplicated with machine name 'multinode-985000-m02' in profile 'multinode-985000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-985000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-985000-m03 --driver=hyperkit : (40.512523097s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-985000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-985000: exit status 103 (174.927898ms)

-- stdout --
	* The control-plane node multinode-985000 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p multinode-985000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-985000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-985000-m03: (3.378689416s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.54s)

TestPreload (136.54s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-212000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-212000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m12.259811082s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-212000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-212000 image pull gcr.io/k8s-minikube/busybox: (1.182663836s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-212000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-212000: (8.375573757s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-212000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
E0805 16:46:19.249138    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-212000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (49.326884375s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-212000 image list
helpers_test.go:175: Cleaning up "test-preload-212000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-212000
E0805 16:46:33.708291    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-212000: (5.242128842s)
--- PASS: TestPreload (136.54s)

TestSkaffold (111.38s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3657463560 version
skaffold_test.go:59: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3657463560 version: (1.844539204s)
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-862000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-862000 --memory=2600 --driver=hyperkit : (36.187715522s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3657463560 run --minikube-profile skaffold-862000 --kube-context skaffold-862000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe3657463560 run --minikube-profile skaffold-862000 --kube-context skaffold-862000 --status-check=true --port-forward=false --interactive=false: (55.588511004s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-7c6dcbdb5-w9zgr" [63024b63-5331-42ba-8d38-d151de83a799] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.005018039s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-56dcd99c58-6sn7h" [46113513-5f1b-4ec5-9287-dfd693491cf0] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.005368029s
helpers_test.go:175: Cleaning up "skaffold-862000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-862000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-862000: (5.253791542s)
--- PASS: TestSkaffold (111.38s)

TestRunningBinaryUpgrade (101.8s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.760348864 start -p running-upgrade-892000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.760348864 start -p running-upgrade-892000 --memory=2200 --vm-driver=hyperkit : (1m1.367625639s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-892000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-892000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (33.780046884s)
helpers_test.go:175: Cleaning up "running-upgrade-892000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-892000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-892000: (5.23413633s)
--- PASS: TestRunningBinaryUpgrade (101.80s)

TestKubernetesUpgrade (1364.74s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-627000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-627000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (52.332019987s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-627000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-627000: (8.398371694s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-627000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-627000 status --format={{.Host}}: exit status 7 (66.420001ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-627000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=hyperkit 
E0805 17:06:19.297291    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 17:06:50.700795    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 17:10:35.336344    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 17:11:19.298306    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 17:11:50.702979    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 17:11:58.389304    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 17:12:42.356150    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 17:15:35.412322    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 17:16:19.376086    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 17:16:50.777503    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-627000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=hyperkit : (10m41.927667061s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-627000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-627000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-627000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (680.63474ms)

-- stdout --
	* [kubernetes-upgrade-627000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-rc.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-627000
	    minikube start -p kubernetes-upgrade-627000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6270002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-627000 --kubernetes-version=v1.31.0-rc.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-627000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=hyperkit 
E0805 17:19:53.840357    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 17:20:35.417738    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 17:21:19.379188    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 17:21:50.782076    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
E0805 17:25:35.422847    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 17:26:19.384320    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
E0805 17:26:50.788449    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-627000 --memory=2200 --kubernetes-version=v1.31.0-rc.0 --alsologtostderr -v=1 --driver=hyperkit : (10m56.023445315s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-627000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-627000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-627000: (5.26549941s)
--- PASS: TestKubernetesUpgrade (1364.74s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.49s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19373
- KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1839071528/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1839071528/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1839071528/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1839071528/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.49s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.07s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19373
- KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1764231517/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1764231517/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1764231517/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1764231517/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.07s)

TestStoppedBinaryUpgrade/Setup (1.51s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.51s)

TestStoppedBinaryUpgrade/Upgrade (142.01s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.1831931666 start -p stopped-upgrade-496000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.1831931666 start -p stopped-upgrade-496000 --memory=2200 --vm-driver=hyperkit : (53.028458401s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.1831931666 -p stopped-upgrade-496000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.1831931666 -p stopped-upgrade-496000 stop: (8.196019092s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-496000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0805 17:28:38.487526    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/skaffold-862000/client.crt: no such file or directory
E0805 17:29:22.455123    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-496000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m20.784031789s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (142.01s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.58s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-496000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-496000: (2.582711346s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.58s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.47s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-204000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-204000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (470.516167ms)
-- stdout --
	* [NoKubernetes-204000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19373
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19373-1122/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19373-1122/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
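The assertion here is only about flag validation: combining --no-kubernetes with --kubernetes-version must be rejected, minikube exits with status 14 (the MK_USAGE error shown above) and prints the config-unset hint. A sketch of that style of check via os/exec, reusing the command line from this run (the expected exit code is taken from this log, not from minikube's documentation):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Flag combination from the log; expected to fail with MK_USAGE (exit 14).
	cmd := exec.Command("out/minikube-darwin-amd64", "start",
		"-p", "NoKubernetes-204000",
		"--no-kubernetes", "--kubernetes-version=1.20", "--driver=hyperkit")
	out, err := cmd.CombinedOutput()

	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		fmt.Println("got expected usage error:")
	}
	fmt.Printf("%s", out)
}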
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.47s)

TestNoKubernetes/serial/StartWithK8s (69.73s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-204000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-204000 --driver=hyperkit : (1m9.526686566s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-204000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (69.73s)

TestNoKubernetes/serial/StartWithStopK8s (17.94s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-204000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-204000 --no-kubernetes --driver=hyperkit : (15.468465177s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-204000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-204000 status -o json: exit status 2 (151.052846ms)
-- stdout --
	{"Name":"NoKubernetes-204000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
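For reference, the stdout block above is the single-node payload of `minikube status -o json`; the non-zero exit appears to encode the stopped kubelet and apiserver rather than a command failure. A minimal decoding sketch, with a struct that mirrors the keys visible in the output (illustrative types, not minikube's own):

package main

import (
	"encoding/json"
	"fmt"
)

// nodeStatus mirrors the keys in the stdout block above; it is a decoding
// sketch, not minikube's internal status type.
type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-204000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st nodeStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// With --no-kubernetes the VM keeps running while kubelet and the
	// apiserver stay stopped, which is what this test checks for.
	fmt.Printf("host=%s kubelet=%s\n", st.Host, st.Kubelet)
}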
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-204000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-204000: (2.317506975s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.94s)

TestNoKubernetes/serial/Start (21.9s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-204000 --no-kubernetes --driver=hyperkit 
E0805 17:31:19.397981    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/functional-558000/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-204000 --no-kubernetes --driver=hyperkit : (21.901177187s)
--- PASS: TestNoKubernetes/serial/Start (21.90s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.12s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-204000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-204000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (124.599545ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
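`systemctl is-active --quiet` signals the unit state through its exit code (0 when active, non-zero otherwise; the status 3 above is the inactive case), and `minikube ssh` surfaces the remote failure as exit status 1, which is exactly what this test treats as "kubelet not running". A sketch of the same probe from Go, with the command string taken verbatim from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `is-active --quiet` prints nothing; state is conveyed via exit code,
	// which minikube ssh propagates back as its own non-zero exit.
	cmd := exec.Command("out/minikube-darwin-amd64", "ssh", "-p", "NoKubernetes-204000",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not running, as expected:", err)
		return
	}
	fmt.Println("unexpected: kubelet is active")
}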
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.12s)

TestNoKubernetes/serial/ProfileList (0.47s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.47s)

TestNoKubernetes/serial/Stop (2.4s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-204000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-204000: (2.395385251s)
--- PASS: TestNoKubernetes/serial/Stop (2.40s)

TestNoKubernetes/serial/StartNoArgs (19.63s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-204000 --driver=hyperkit 
E0805 17:31:50.801258    1678 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19373-1122/.minikube/profiles/addons-871000/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-204000 --driver=hyperkit : (19.628745147s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.63s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-204000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-204000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (131.10641ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

Test skip (20/227)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-rc.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-rc.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0-rc.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-rc.0/binaries (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)